BUG #16045: vacuum_db crash and illegal memory alloc after pg_upgrade from PG11 to PG12
The following bug has been logged on the website:
Bug reference: 16045
Logged by: Hans Buschmann
Email address: buschmann@nidsa.net
PostgreSQL version: 12.0
Operating system: Windows 10 64bit
Description:
I just did a pg_upgrade from pg 11.5 to pg 12.0 on my development machine
under Windows 64bit (both distributions from EDB).
cpsdb=# select version ();
version
------------------------------------------------------------
PostgreSQL 12.0, compiled by Visual C++ build 1914, 64-bit
(1 row)
The pg_upgrade with --link went flawlessly, I started (only!) the new server
12.0 and could connect and access individual databases.
As recommended by the resulting analyze_new_cluster.bat I tried a full
vacuumdb with:
"N:/pgsql/bin/vacuumdb" -U postgres --all --analyze-only
which failed with
vacuumdb: vacuuming database "cpsdb"
vacuumdb: error: vacuuming of table "admin.q_tbl_archiv" in database "cpsdb"
failed: ERROR: compressed data is corrupted
I connected to the database through psql and looked at the table
"admin.q_tbl_archiv"
cpsdb=# \d+ q_tbl_archiv;
                         Table "admin.q_tbl_archiv"
      Column      |                Type                | Collation | Nullable | Default | Storage  | Stats target | Description
------------------+------------------------------------+-----------+----------+---------+----------+--------------+-------------
 table_name       | information_schema.sql_identifier  |           |          |         | plain    |              |
 column_name      | information_schema.sql_identifier  |           |          |         | plain    |              |
 ordinal_position | information_schema.cardinal_number |           |          |         | plain    |              |
 col_qualifier    | text                               |           |          |         | extended |              |
 id_column        | information_schema.sql_identifier  |           |          |         | plain    |              |
 id_default       | information_schema.character_data  |           |          |         | extended |              |
Access method: heap
When trying to select * from q_tbl_archiv I got:
cpsdb=# select * from q_tbl_archiv;
ERROR: invalid memory alloc request size 18446744073709551613
This table was created long ago, under 9.5 or 9.6, with the following
command (truncated here):
create table q_tbl_archiv as
with
qseason as (
select table_name,column_name, ordinal_position
,replace(column_name,'_season','') as col_qualifier
-- ,'id_'||replace(column_name,'_season','') as id_column
from information_schema.columns
where
column_name like '%_season'
and ordinal_position < 10
and table_name in (
'table1'
,'table2'
-- here truncated:
-- ... (here were all of my tables having columns like xxx_season)
-- to reproduce, change to your own tablenames in a test database
)
order by table_name
)
select qs.*,c.column_name as id_column, c.column_default as id_default
from
qseason qs
left join information_schema.columns c on c.table_name=qs.table_name and
c.column_name like 'id_%'
;
Until now this table had always been restored without error when migrating to
a new major version via pg_dump/initdb/pg_restore.
To verify the integrity of the table, I restored the pg_dump taken under
pg 11.5 just before the pg_upgrade to another machine.
The restore and analyze went OK, and select * from q_tbl_archiv showed all
tuples, e.g. (edited):
cpsdb_dev=# select * from q_tbl_archiv;
 table_name | column_name | ordinal_position | col_qualifier | id_column |              id_default
------------+-------------+------------------+---------------+-----------+---------------------------------------
 table1     | chm_season  | 2                | chm           |           |
 table2     | cs_season   | 2                | cs            | id_cs     | nextval('table2_id_cs_seq'::regclass)
...
In conclusion, this seems to me like an error/omission of pg_upgrade.
It seems to handle these tables derived from information_schema incorrectly,
resulting in failures in the upgraded database.
For me, this error is not so crucial, because this table is only used for
administrative purposes and can easily be restored from backup.
But I want to share my findings for the sake of other users of pg_upgrade.
Thanks for investigating!
Hans Buschmann
On Tue, Oct 08, 2019 at 05:08:53PM +0000, PG Bug reporting form wrote:
[full bug report quoted above - snipped]
In conclusion, this seems to me like an error/omission of pg_upgrade.
There's clearly something bad happening. It's a bit strange, though. Had
this been a data corruption issue, I'd expect the pg_dump to fail too,
but it succeeds.
It seems to handle these tables derived from information_schema incorrectly,
resulting in failures in the upgraded database.
Well, I don't see how that should make any difference. It's a CTAS and
that should create a regular table, that's not an issue. I wonder if
there were some changes to the data types involved, but that would be
essentially a break in on-disk format and we're careful about not doing
that ...
For me, this error is not so crucial, because this table is only used for
administrative purposes and can easily be restored from backup.
But I want to share my findings for the sake of other users of pg_upgrade.
OK, thanks. Could you maybe set
log_error_verbosity = verbose
before invoking the vacuum (you can set that in that session)? That
should give us more details about where exactly the error is triggered.
Even better, if you could attach a debugger to the session, set
breakpoints on locations triggering 'invalid memory alloc request size'
and then show the backtrace (obviously, that's more complicated).
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
FWIW I can reproduce this - it's enough to do this on the 11 cluster
create table q_tbl_archiv as
with
qseason as (
select table_name,column_name, ordinal_position
,replace(column_name,'_season','') as col_qualifier
-- ,'id_'||replace(column_name,'_season','') as id_column
from information_schema.columns
order by table_name
)
select qs.*,c.column_name as id_column, c.column_default as id_default
from
qseason qs
left join information_schema.columns c on c.table_name=qs.table_name and
c.column_name like 'id_%';
and then
analyze q_tbl_archiv
which produces backtrace like this:
No symbol "stats" in current context.
(gdb) bt
#0 0x0000746095262951 in __memmove_avx_unaligned_erms () from /lib64/libc.so.6
#1 0x0000000000890a8e in varstrfastcmp_locale (a1p=0x17716b4 "per_language\a", len1=<optimized out>, a2p=0x176af28 '\177' <repeats 136 times>, "\021\004", len2=-4, ssup=<optimized out>, ssup=<optimized out>) at varlena.c:2320
#2 0x0000000000890cb1 in varlenafastcmp_locale (x=24581808, y=24555300, ssup=0x7ffc649463f0) at varlena.c:2219
#3 0x00000000005b73b4 in ApplySortComparator (ssup=0x7ffc649463f0, isNull2=false, datum2=<optimized out>, isNull1=false, datum1=<optimized out>) at ../../../src/include/utils/sortsupport.h:224
#4 compare_scalars (a=<optimized out>, b=<optimized out>, arg=0x7ffc649463e0) at analyze.c:2700
#5 0x00000000008f9953 in qsort_arg (a=a@entry=0x178fdc0, n=<optimized out>, n@entry=2158, es=es@entry=16, cmp=cmp@entry=0x5b7390 <compare_scalars>, arg=arg@entry=0x7ffc649463e0) at qsort_arg.c:140
#6 0x00000000005b86a6 in compute_scalar_stats (stats=0x176a208, fetchfunc=<optimized out>, samplerows=<optimized out>, totalrows=2158) at analyze.c:2273
#7 0x00000000005b9d95 in do_analyze_rel (onerel=onerel@entry=0x74608c00d3e8, params=params@entry=0x7ffc64946970, va_cols=va_cols@entry=0x0, acquirefunc=<optimized out>, relpages=22, inh=inh@entry=false, in_outer_xact=false, elevel=13)
at analyze.c:529
#8 0x00000000005bb2c9 in analyze_rel (relid=<optimized out>, relation=<optimized out>, params=params@entry=0x7ffc64946970, va_cols=0x0, in_outer_xact=<optimized out>, bstrategy=<optimized out>) at analyze.c:260
#9 0x000000000062c7b0 in vacuum (relations=0x1727120, params=params@entry=0x7ffc64946970, bstrategy=<optimized out>, bstrategy@entry=0x0, isTopLevel=isTopLevel@entry=true) at vacuum.c:413
#10 0x000000000062cd49 in ExecVacuum (pstate=pstate@entry=0x16c9518, vacstmt=vacstmt@entry=0x16a82b8, isTopLevel=isTopLevel@entry=true) at vacuum.c:199
#11 0x00000000007a6d64 in standard_ProcessUtility (pstmt=0x16a8618, queryString=0x16a77a8 "", context=<optimized out>, params=0x0, queryEnv=0x0, dest=0x16a8710, completionTag=0x7ffc64946cb0 "") at utility.c:670
#12 0x00000000007a4006 in PortalRunUtility (portal=0x170f368, pstmt=0x16a8618, isTopLevel=<optimized out>, setHoldSnapshot=<optimized out>, dest=0x16a8710, completionTag=0x7ffc64946cb0 "") at pquery.c:1175
#13 0x00000000007a4b61 in PortalRunMulti (portal=portal@entry=0x170f368, isTopLevel=isTopLevel@entry=true, setHoldSnapshot=setHoldSnapshot@entry=false, dest=dest@entry=0x16a8710, altdest=altdest@entry=0x16a8710,
completionTag=completionTag@entry=0x7ffc64946cb0 "") at pquery.c:1321
#14 0x00000000007a5864 in PortalRun (portal=portal@entry=0x170f368, count=count@entry=9223372036854775807, isTopLevel=isTopLevel@entry=true, run_once=run_once@entry=true, dest=dest@entry=0x16a8710, altdest=altdest@entry=0x16a8710,
completionTag=0x7ffc64946cb0 "") at pquery.c:796
#15 0x00000000007a174e in exec_simple_query (query_string=0x16a77a8 "") at postgres.c:1215
Looking at compute_scalar_stats, the "stats" parameter does not seem
particularly healthy:
(gdb) p *stats
$3 = {attr = 0x10, attrtypid = 12, attrtypmod = 0, attrtype = 0x1762e00, attrcollid = 356, anl_context = 0x7f7f7f7e00000000, compute_stats = 0x100, minrows = 144, extra_data = 0x1762e00, stats_valid = false, stanullfrac = 0,
stawidth = 0, stadistinct = 0, stakind = {0, 0, 0, 0, 0}, staop = {0, 0, 0, 0, 0}, stacoll = {0, 0, 0, 0, 0}, numnumbers = {0, 0, 0, 0, 0}, stanumbers = {0x0, 0x0, 0x0, 0x0, 0x0}, numvalues = {0, 0, 0, 0, 2139062142}, stavalues = {
0x7f7f7f7f7f7f7f7f, 0x7f7f7f7f7f7f7f7f, 0x7f7f7f7f7f7f7f7f, 0x7f7f7f7f7f7f7f7f, 0x7f7f7f7f7f7f7f7f}, statypid = {2139062143, 2139062143, 2139062143, 2139062143, 2139062143}, statyplen = {32639, 32639, 32639, 32639, 32639},
statypbyval = {127, 127, 127, 127, 127}, statypalign = "\177\177\177\177\177", tupattnum = 2139062143, rows = 0x7f7f7f7f7f7f7f7f, tupDesc = 0x7f7f7f7f7f7f7f7f, exprvals = 0x8, exprnulls = 0x4, rowstride = 24522240}
Not sure about the root cause yet.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:
FWIW I can reproduce this - it's enough to do this on the 11 cluster
I failed to reproduce any problem from your example, but I was trying
in C locale on a Linux machine. What environment are you testing?
regards, tom lane
On Wed, Oct 09, 2019 at 03:59:07PM +0200, Tomas Vondra wrote:
FWIW I can reproduce this - it's enough to do this on the 11 cluster
[repro query and backtrace snipped - quoted in full above]
Not sure about the root cause yet.
OK, a couple more observations - the table schema looks like this:
Table "public.q_tbl_archiv"
Column | Type | Collation | Nullable | Default
------------------+------------------------------------+-----------+----------+---------
table_name | information_schema.sql_identifier | | |
column_name | information_schema.sql_identifier | | |
ordinal_position | information_schema.cardinal_number | | |
col_qualifier | text | | |
id_column | information_schema.sql_identifier | | |
id_default | information_schema.character_data | | |
and I can successfully do this:
test=# analyze q_tbl_archiv (table_name, column_name, ordinal_position, id_column, id_default);
ANALYZE
but as soon as I include the col_qualifier column, it fails:
test=# analyze q_tbl_archiv (table_name, column_name, ordinal_position, id_column, id_default, col_qualifier);
ERROR: compressed data is corrupted
But it fails differently (with a segfault) when analyzing just that one
column:
test=# analyze q_tbl_archiv (col_qualifier);
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
The connection to the server was lost. Attempting reset: Succeeded.
Moreover, there are some other interesting failures - I can do
select max(table_name) from q_tbl_archiv;
select max(column_name) from q_tbl_archiv;
select max(ordinal_position) from q_tbl_archiv;
but as soon as I try doing that with col_qualifier, it crashes and
burns:
select max(col_qualifier) from q_tbl_archiv;
The backtrace is rather strange in this case (a lot of missing calls,
etc.). However, when called for the next two columns, it still crashes,
but the backtraces look somewhat saner:
select max(id_column) from q_tbl_archiv;
Program received signal SIGSEGV, Segmentation fault.
0x00007db3186c6617 in __strlen_avx2 () from /lib64/libc.so.6
(gdb) bt
#0 0x00007db3186c6617 in __strlen_avx2 () from /lib64/libc.so.6
#1 0x0000000000894ced in cstring_to_text (s=0x7db32ce38935 <error: Cannot access memory at address 0x7db32ce38935>) at varlena.c:173
#2 name_text (fcinfo=<optimized out>) at varlena.c:3573
#3 0x000000000063860d in ExecInterpExpr (state=0x1136900, econtext=0x1135128, isnull=<optimized out>) at execExprInterp.c:649
#4 0x000000000064f699 in ExecEvalExprSwitchContext (isNull=0x7ffcfd8f3b2f, econtext=<optimized out>, state=<optimized out>) at ../../../src/include/executor/executor.h:307
#5 advance_aggregates (aggstate=0x1134ef0, aggstate=0x1134ef0) at nodeAgg.c:679
#6 agg_retrieve_direct (aggstate=0x1134ef0) at nodeAgg.c:1847
#7 ExecAgg (pstate=0x1134ef0) at nodeAgg.c:1572
#8 0x000000000063b58b in ExecProcNode (node=0x1134ef0) at ../../../src/include/executor/executor.h:239
#9 ExecutePlan (execute_once=<optimized out>, dest=0x1144248, direction=<optimized out>, numberTuples=0, sendTuples=<optimized out>, operation=CMD_SELECT, use_parallel_mode=<optimized out>, planstate=0x1134ef0, estate=0x1134c98)
at execMain.c:1646
#10 standard_ExecutorRun (queryDesc=0x1094f18, direction=<optimized out>, count=0, execute_once=<optimized out>) at execMain.c:364
#11 0x00000000007a43cc in PortalRunSelect (portal=0x10da368, forward=<optimized out>, count=0, dest=<optimized out>) at pquery.c:929
#12 0x00000000007a5958 in PortalRun (portal=portal@entry=0x10da368, count=count@entry=9223372036854775807, isTopLevel=isTopLevel@entry=true, run_once=run_once@entry=true, dest=dest@entry=0x1144248, altdest=altdest@entry=0x1144248,
completionTag=0x7ffcfd8f3db0 "") at pquery.c:770
#13 0x00000000007a177e in exec_simple_query (query_string=0x10727a8 "select max(id_column) from q_tbl_archiv ;") at postgres.c:1215
#14 0x00000000007a2f3f in PostgresMain (argc=<optimized out>, argv=argv@entry=0x109e400, dbname=<optimized out>, username=<optimized out>) at postgres.c:4236
#15 0x00000000007237ce in BackendRun (port=0x1097c30, port=0x1097c30) at postmaster.c:4437
#16 BackendStartup (port=0x1097c30) at postmaster.c:4128
#17 ServerLoop () at postmaster.c:1704
#18 0x000000000072458e in PostmasterMain (argc=argc@entry=3, argv=argv@entry=0x106c350) at postmaster.c:1377
#19 0x000000000047d101 in main (argc=3, argv=0x106c350) at main.c:228
select max(id_default) from q_tbl_archiv;
Program received signal SIGABRT, Aborted.
0x00007db3185a1e35 in raise () from /lib64/libc.so.6
(gdb) bt
#0 0x00007db3185a1e35 in raise () from /lib64/libc.so.6
#1 0x00007db31858c895 in abort () from /lib64/libc.so.6
#2 0x00000000008b4470 in ExceptionalCondition (conditionName=conditionName@entry=0xabe49e "1", errorType=errorType@entry=0x907128 "unrecognized TOAST vartag", fileName=fileName@entry=0xa4965b "execTuples.c",
lineNumber=lineNumber@entry=971) at assert.c:54
#3 0x00000000006466d3 in slot_deform_heap_tuple (natts=6, offp=0x1135170, tuple=<optimized out>, slot=0x1135128) at execTuples.c:985
#4 tts_buffer_heap_getsomeattrs (slot=0x1135128, natts=<optimized out>) at execTuples.c:676
#5 0x00000000006489fc in slot_getsomeattrs_int (slot=slot@entry=0x1135128, attnum=6) at execTuples.c:1877
#6 0x00000000006379a3 in slot_getsomeattrs (attnum=<optimized out>, slot=0x1135128) at ../../../src/include/executor/tuptable.h:345
#7 ExecInterpExpr (state=0x11364b0, econtext=0x1134cd8, isnull=<optimized out>) at execExprInterp.c:441
#8 0x000000000064f699 in ExecEvalExprSwitchContext (isNull=0x7ffcfd8f3b2f, econtext=<optimized out>, state=<optimized out>) at ../../../src/include/executor/executor.h:307
#9 advance_aggregates (aggstate=0x1134aa0, aggstate=0x1134aa0) at nodeAgg.c:679
#10 agg_retrieve_direct (aggstate=0x1134aa0) at nodeAgg.c:1847
#11 ExecAgg (pstate=0x1134aa0) at nodeAgg.c:1572
#12 0x000000000063b58b in ExecProcNode (node=0x1134aa0) at ../../../src/include/executor/executor.h:239
#13 ExecutePlan (execute_once=<optimized out>, dest=0x11439d8, direction=<optimized out>, numberTuples=0, sendTuples=<optimized out>, operation=CMD_SELECT, use_parallel_mode=<optimized out>, planstate=0x1134aa0, estate=0x1134848)
at execMain.c:1646
#14 standard_ExecutorRun (queryDesc=0x1094f18, direction=<optimized out>, count=0, execute_once=<optimized out>) at execMain.c:364
#15 0x00000000007a43cc in PortalRunSelect (portal=0x10da368, forward=<optimized out>, count=0, dest=<optimized out>) at pquery.c:929
#16 0x00000000007a5958 in PortalRun (portal=portal@entry=0x10da368, count=count@entry=9223372036854775807, isTopLevel=isTopLevel@entry=true, run_once=run_once@entry=true, dest=dest@entry=0x11439d8, altdest=altdest@entry=0x11439d8,
completionTag=0x7ffcfd8f3db0 "") at pquery.c:770
#17 0x00000000007a177e in exec_simple_query (query_string=0x10727a8 "select max(id_default) from q_tbl_archiv ;") at postgres.c:1215
#18 0x00000000007a2f3f in PostgresMain (argc=<optimized out>, argv=argv@entry=0x109e4f0, dbname=<optimized out>, username=<optimized out>) at postgres.c:4236
#19 0x00000000007237ce in BackendRun (port=0x10976f0, port=0x10976f0) at postmaster.c:4437
#20 BackendStartup (port=0x10976f0) at postmaster.c:4128
#21 ServerLoop () at postmaster.c:1704
#22 0x000000000072458e in PostmasterMain (argc=argc@entry=3, argv=argv@entry=0x106c350) at postmaster.c:1377
#23 0x000000000047d101 in main (argc=3, argv=0x106c350) at main.c:228
It's quite puzzling, though. If I had to guess, I'd say it's some sort
of memory management issue (either we're corrupting it somehow, or
perhaps using it after pfree).
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
On Wed, Oct 09, 2019 at 10:07:01AM -0400, Tom Lane wrote:
Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:
FWIW I can reproduce this - it's enough to do this on the 11 cluster
I failed to reproduce any problem from your example, but I was trying
in C locale on a Linux machine. What environment are you testing?
test=# show lc_collate ;
lc_collate
------------
C.UTF-8
(1 row)
I can reproduce this pretty easily like this:
1) build 11
git checkout REL_11_STABLE
./configure --prefix=/home/user/pg-11 --enable-debug --enable-cassert && make -s clean && make -s -j4 install
2) build 12
git checkout REL_12_STABLE
./configure --prefix=/home/user/pg-12 --enable-debug --enable-cassert && make -s clean && make -s -j4 install
3) create the 11 cluster
/home/user/pg-11/bin/pg_ctl -D /tmp/data-11 init
/home/user/pg-11/bin/pg_ctl -D /tmp/data-11 -l /tmp/pg-11.log start
/home/user/pg-11/bin/createdb test
/home/user/pg-11/bin/psql test
4) create the table
create table q_tbl_archiv as
with
qseason as (
select table_name,column_name, ordinal_position
,replace(column_name,'_season','') as col_qualifier
-- ,'id_'||replace(column_name,'_season','') as id_column
from information_schema.columns
order by table_name
)
select qs.*,c.column_name as id_column, c.column_default as id_default
from
qseason qs
left join information_schema.columns c on c.table_name=qs.table_name and
c.column_name like 'id_%';
5) shutdown the 11 cluster
/home/user/pg-11/bin/pg_ctl -D /tmp/data-11 stop
6) init 12 cluster
/home/user/pg-12/bin/pg_ctl -D /tmp/data-12 init
7) do the pg_upgrade thing
/home/user/pg-12/bin/pg_upgrade -b /home/user/pg-11/bin -B /home/user/pg-12/bin -d /tmp/data-11 -D /tmp/data-12 -k
8) start 12 cluster
/home/user/pg-12/bin/pg_ctl -D /tmp/data-12 -l /tmp/pg-12.log start
9) kabooom
/home/user/pg-12/bin/psql test -c "analyze q_tbl_archiv"
On my system (Fedora 30 on x86_64) this reliably results in a crash (and
various other crashes as demonstrated in my previous message).
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Hi Tomas,
Nice that you could reproduce it.
That is just the way I followed, too.
For your info, here are my non-standard config params:
name | current_setting
------------------------------------+---------------------------------
application_name | psql
auto_explain.log_analyze | on
auto_explain.log_min_duration | 0
auto_explain.log_nested_statements | on
client_encoding | WIN1252
cluster_name | HB_DEV
data_checksums | on
DateStyle | ISO, DMY
default_text_search_config | pg_catalog.german
dynamic_shared_memory_type | windows
effective_cache_size | 8GB
lc_collate | C
lc_ctype | German_Germany.1252
lc_messages | C
lc_monetary | German_Germany.1252
lc_numeric | German_Germany.1252
lc_time | German_Germany.1252
log_destination | stderr
log_directory | N:/ZZ_log/pg_log_hbdev
log_error_verbosity | verbose
log_file_mode | 0640
log_line_prefix | WHB %a %t %i %e %2l:>
log_statement | mod
log_temp_files | 0
log_timezone | CET
logging_collector | on
maintenance_work_mem | 128MB
max_connections | 100
max_stack_depth | 2MB
max_wal_size | 1GB
min_wal_size | 80MB
pg_stat_statements.max | 5000
pg_stat_statements.track | all
random_page_cost | 1
search_path | public, archiv, ablage, admin
server_encoding | UTF8
server_version | 12.0
shared_buffers | 1GB
shared_preload_libraries | auto_explain,pg_stat_statements
temp_buffers | 32MB
TimeZone | CET
transaction_deferrable | off
transaction_isolation | read committed
transaction_read_only | off
update_process_title | off
wal_buffers | 16MB
wal_segment_size | 16MB
work_mem | 32MB
(48 rows)
Indeed, the database has UTF8 encoding.
The extended error log (I have auto_explain set):
WHB psql 2019-10-09 15:45:03 CEST XX000 7:> ERROR: XX000: invalid memory alloc request size 18446744073709551613
WHB psql 2019-10-09 15:45:03 CEST XX000 8:> LOCATION: palloc, d:\pginstaller_12.auto\postgres.windows-x64\src\backend\utils\mmgr\mcxt.c:934
WHB psql 2019-10-09 15:45:03 CEST XX000 9:> STATEMENT: select * from q_tbl_archiv;
WHB vacuumdb 2019-10-09 15:46:42 CEST 00000 1:> LOG: 00000: duration: 0.022 ms plan:
Query Text: SELECT pg_catalog.set_config('search_path', '', false);
Result (cost=0.00..0.01 rows=1 width=32) (actual time=0.014..0.015 rows=1 loops=1)
WHB vacuumdb 2019-10-09 15:46:42 CEST 00000 2:> LOCATION: explain_ExecutorEnd, d:\pginstaller_12.auto\postgres.windows-x64\contrib\auto_explain\auto_explain.c:415
WHB vacuumdb 2019-10-09 15:46:42 CEST 00000 3:> LOG: 00000: duration: 0.072 ms plan:
Query Text: SELECT datname FROM pg_database WHERE datallowconn ORDER BY 1;
Sort (cost=1.16..1.16 rows=1 width=64) (actual time=0.063..0.064 rows=14 loops=1)
Sort Key: datname
Sort Method: quicksort Memory: 26kB
-> Seq Scan on pg_database (cost=0.00..1.15 rows=1 width=64) (actual time=0.018..0.022 rows=14 loops=1)
Filter: datallowconn
Rows Removed by Filter: 1
WHB vacuumdb 2019-10-09 15:46:42 CEST 00000 4:> LOCATION: explain_ExecutorEnd, d:\pginstaller_12.auto\postgres.windows-x64\contrib\auto_explain\auto_explain.c:415
WHB vacuumdb 2019-10-09 15:46:43 CEST 00000 1:> LOG: 00000: duration: 0.027 ms plan:
Query Text: SELECT pg_catalog.set_config('search_path', '', false);
Result (cost=0.00..0.01 rows=1 width=32) (actual time=0.012..0.013 rows=1 loops=1)
WHB vacuumdb 2019-10-09 15:46:43 CEST 00000 2:> LOCATION: explain_ExecutorEnd, d:\pginstaller_12.auto\postgres.windows-x64\contrib\auto_explain\auto_explain.c:415
WHB vacuumdb 2019-10-09 15:46:43 CEST 00000 3:> LOG: 00000: duration: 1.036 ms plan:
Query Text: SELECT c.relname, ns.nspname FROM pg_catalog.pg_class c
JOIN pg_catalog.pg_namespace ns ON c.relnamespace OPERATOR(pg_catalog.=) ns.oid
LEFT JOIN pg_catalog.pg_class t ON c.reltoastrelid OPERATOR(pg_catalog.=) t.oid
WHERE c.relkind OPERATOR(pg_catalog.=) ANY (array['r', 'm'])
ORDER BY c.relpages DESC;
Sort (cost=56.56..56.59 rows=13 width=132) (actual time=0.843..0.854 rows=320 loops=1)
Sort Key: c.relpages DESC
Sort Method: quicksort Memory: 110kB
-> Hash Join (cost=1.23..56.32 rows=13 width=132) (actual time=0.082..0.649 rows=320 loops=1)
Hash Cond: (c.relnamespace = ns.oid)
-> Seq Scan on pg_class c (cost=0.00..55.05 rows=13 width=76) (actual time=0.034..0.545 rows=320 loops=1)
Filter: ((relkind)::text = ANY ('{r,m}'::text[]))
Rows Removed by Filter: 950
-> Hash (cost=1.10..1.10 rows=10 width=68) (actual time=0.022..0.022 rows=10 loops=1)
Buckets: 1024 Batches: 1 Memory Usage: 9kB
-> Seq Scan on pg_namespace ns (cost=0.00..1.10 rows=10 width=68) (actual time=0.010..0.011 rows=10 loops=1)
WHB vacuumdb 2019-10-09 15:46:43 CEST 00000 4:> LOCATION: explain_ExecutorEnd, d:\pginstaller_12.auto\postgres.windows-x64\contrib\auto_explain\auto_explain.c:415
WHB vacuumdb 2019-10-09 15:46:43 CEST 00000 5:> LOG: 00000: duration: 0.011 ms plan:
Query Text: SELECT pg_catalog.set_config('search_path', '', false);
Result (cost=0.00..0.01 rows=1 width=32) (actual time=0.008..0.008 rows=1 loops=1)
WHB vacuumdb 2019-10-09 15:46:43 CEST 00000 6:> LOCATION: explain_ExecutorEnd, d:\pginstaller_12.auto\postgres.windows-x64\contrib\auto_explain\auto_explain.c:415
WHB 2019-10-09 15:47:01 CEST 00000 22:> LOG: 00000: server process (PID 4708) was terminated by exception 0xC0000005
WHB 2019-10-09 15:47:01 CEST 00000 23:> DETAIL: Failed process was running: ANALYZE admin.q_tbl_archiv;
WHB 2019-10-09 15:47:01 CEST 00000 24:> HINT: See C include file "ntstatus.h" for a description of the hexadecimal value.
WHB 2019-10-09 15:47:01 CEST 00000 25:> LOCATION: LogChildExit, d:\pginstaller_12.auto\postgres.windows-x64\src\backend\postmaster\postmaster.c:3670
WHB 2019-10-09 15:47:01 CEST 00000 26:> LOG: 00000: terminating any other active server processes
WHB 2019-10-09 15:47:01 CEST 00000 27:> LOCATION: HandleChildCrash, d:\pginstaller_12.auto\postgres.windows-x64\src\backend\postmaster\postmaster.c:3400
WHB psql 2019-10-09 15:47:01 CEST 57P02 10:> WARNING: 57P02: terminating connection because of crash of another server process
WHB psql 2019-10-09 15:47:01 CEST 57P02 11:> DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
WHB psql 2019-10-09 15:47:01 CEST 57P02 12:> HINT: In a moment you should be able to reconnect to the database and repeat your command.
WHB psql 2019-10-09 15:47:01 CEST 57P02 13:> LOCATION: quickdie, d:\pginstaller_12.auto\postgres.windows-x64\src\backend\tcop\postgres.c:2717
WHB 2019-10-09 15:47:02 CEST 57P02 3:> WARNING: 57P02: terminating connection because of crash of another server process
WHB 2019-10-09 15:47:02 CEST 57P02 4:> DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
WHB 2019-10-09 15:47:02 CEST 57P02 5:> HINT: In a moment you should be able to reconnect to the database and repeat your command.
WHB 2019-10-09 15:47:02 CEST 57P02 6:> LOCATION: quickdie, d:\pginstaller_12.auto\postgres.windows-x64\src\backend\tcop\postgres.c:2717
WHB 2019-10-09 15:47:02 CEST 00000 28:> LOG: 00000: all server processes terminated; reinitializing
WHB 2019-10-09 15:47:02 CEST 00000 29:> LOCATION: PostmasterStateMachine, d:\pginstaller_12.auto\postgres.windows-x64\src\backend\postmaster\postmaster.c:3912
WHB 2019-10-09 15:47:02 CEST 00000 1:> LOG: 00000: database system was interrupted; last known up at 2019-10-09 15:46:03 CEST
WHB 2019-10-09 15:47:02 CEST 00000 2:> LOCATION: StartupXLOG, d:\pginstaller_12.auto\postgres.windows-x64\src\backend\access\transam\xlog.c:6277
The table was imported successively via pg_dump/pg_restore from the previous versions into PG 11.
This is the same as what I did on the other machine (PG 11.5). On this test machine I could successfully export the table with pg_dump -t.
On the erroneous PG 12 cluster I succeeded in recreating a similar table with the original CREATE TABLE statements: no errors.
On the upgraded PG 12 cluster, I tried to select only the first column (select table_name from q_tbl_archiv) and got erroneous results (first 2 entries shown):
cpsdb=# select table_name from q_tbl_archiv;
table_name
---------------------------------------------
\x11chemmat\x17chm_season
!collectionsheet\x15cs_season
It seems that the length bytes are present in the output.
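That observation matches PostgreSQL's 1-byte "short" varlena header format: once the old fixed-width name data is reinterpreted as varlena, the first byte of each value is read as a length header. A minimal sketch of that encoding (assuming the standard little-endian short-varlena layout, where the header byte is the total length, including the header itself, shifted left one bit with the low bit set):

```python
# Compute a PostgreSQL 1-byte ("short") varlena header for a small
# string: header = (total_len << 1) | 1, where total_len includes the
# header byte itself (little-endian builds, values up to 126 bytes).
def short_varlena_header(payload: str) -> int:
    total_len = len(payload.encode()) + 1  # +1 for the header byte
    return (total_len << 1) | 1

# The stray bytes in the corrupted output match this encoding exactly:
assert short_varlena_header("chemmat") == 0x11          # \x11chemmat
assert short_varlena_header("chm_season") == 0x17       # \x17chm_season
assert short_varlena_header("collectionsheet") == 0x21  # 0x21 is '!', hence !collectionsheet
assert short_varlena_header("cs_season") == 0x15        # \x15cs_season
```

So the "garbage" characters are the would-be varlena length bytes of each identifier, which is consistent with fixed-width name storage being read with varlena rules.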
Hope this information helps.
Hans Buschmann
Well, I think I found the root cause. It's because of 7c15cef86d, which
changed the definition of sql_identifier so that it's a domain over name
instead of varchar. So we now have this:
SELECT typname, typlen FROM pg_type WHERE typname = 'sql_identifier';
-[ RECORD 1 ]--+---------------
typname | sql_identifier
typlen | 64
instead of this
-[ RECORD 1 ]--+---------------
typname | sql_identifier
typlen | -1
Unfortunately, that seems very much like a break of on-disk format, and
after pg_upgrade any table containing sql_identifier columns is pretty
much guaranteed to be badly mangled. For example, the first row from the
table used in the original report looks like this on PostgreSQL 11:
test=# select ctid, * from q_tbl_archiv limit 1;
-[ RECORD 1 ]----+--------------------------
ctid | (0,1)
table_name | _pg_foreign_data_wrappers
column_name | foreign_data_wrapper_name
ordinal_position | 5
col_qualifier | foreign_data_wrapper_name
id_column |
id_default |
while on PostgreSQL 12 after pg_upgrade it looks like this
test=# select ctid, table_name, column_name, ordinal_position from q_tbl_archiv limit 1;
-[ RECORD 1 ]----+---------------------------------------------------------
ctid | (0,1)
table_name | 5_pg_foreign_data_wrappers5foreign_data_wrapper_name\x05
column_name | _data_wrapper_name
ordinal_position | 0
Not sure what to do about this :-(
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:
Well, I think I found the root cause. It's because of 7c15cef86d, which
changed the definition of sql_identifier so that it's a domain over name
instead of varchar.
Ah...
Not sure what to do about this :-(
Fortunately, there should be close to zero people with user tables
depending on sql_identifier. I think we should just add a test in
pg_upgrade that refuses to upgrade if there are any such columns.
It won't be the first such restriction.
regards, tom lane
On Wed, Oct 09, 2019 at 07:18:45PM -0400, Tom Lane wrote:
Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:
Well, I think I found the root cause. It's because of 7c15cef86d, which
changed the definition of sql_identifier so that it's a domain over name
instead of varchar.
Ah...
Not sure what to do about this :-(
Fortunately, there should be close to zero people with user tables
depending on sql_identifier. I think we should just add a test in
pg_upgrade that refuses to upgrade if there are any such columns.
It won't be the first such restriction.
Hmmm, yeah. I agree the number of people using sql_identifier in user
tables is low, but OTOH we got this report within a week after release,
so maybe it's higher than we think.
Another option would be to teach pg_upgrade to switch the columns to
'text' or 'varchar', not sure if that's possible or how much work would
that be.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:
On Wed, Oct 09, 2019 at 07:18:45PM -0400, Tom Lane wrote:
Fortunately, there should be close to zero people with user tables
depending on sql_identifier. I think we should just add a test in
pg_upgrade that refuses to upgrade if there are any such columns.
It won't be the first such restriction.
Hmmm, yeah. I agree the number of people using sql_identifier in user
tables is low, but OTOH we got this report within a week after release,
so maybe it's higher than we think.
True.
Another option would be to teach pg_upgrade to switch the columns to
'text' or 'varchar', not sure if that's possible or how much work would
that be.
I think it'd be a mess --- the actual hacking would have to happen in
pg_dump, I think, and it'd be a kluge because pg_dump doesn't normally
understand what server version its output is going to. So we'd more
or less have to control it through a new pg_dump switch that pg_upgrade
would use. Ick.
Also, even if we did try to silently convert such columns that way,
I bet we'd get other bug reports about "why'd my columns suddenly
change type?". So I'd rather force the user to be involved.
regards, tom lane
On 2019-10-09 19:41:54 -0400, Tom Lane wrote:
Also, even if we did try to silently convert such columns that way,
I bet we'd get other bug reports about "why'd my columns suddenly
change type?". So I'd rather force the user to be involved.
+1
On Wed, Oct 09, 2019 at 06:48:13PM -0700, Andres Freund wrote:
On 2019-10-09 19:41:54 -0400, Tom Lane wrote:
Also, even if we did try to silently convert such columns that way,
I bet we'd get other bug reports about "why'd my columns suddenly
change type?". So I'd rather force the user to be involved.
+1
Fair enough, attached is a patch doing that, I think. Maybe the file
should be named differently, as it contains other objects than just
tables.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Attachments:
pg-upgrade-sql-identifier-fix.patch (text/plain; charset=us-ascii)
diff --git a/src/bin/pg_upgrade/check.c b/src/bin/pg_upgrade/check.c
index 617270f101..c6dc50f3e6 100644
--- a/src/bin/pg_upgrade/check.c
+++ b/src/bin/pg_upgrade/check.c
@@ -108,6 +108,13 @@ check_and_dump_old_cluster(bool live_check)
if (GET_MAJOR_VERSION(old_cluster.major_version) <= 1100)
check_for_tables_with_oids(&old_cluster);
+ /*
+ * PG 12 changed the 'sql_identifier' type storage format, so we need
+ * to prevent upgrade when used in user objects (tables, indexes, ...)
+ */
+ if (GET_MAJOR_VERSION(old_cluster.major_version) <= 1100)
+ old_11_check_for_sql_identifier_data_type_usage(&old_cluster);
+
/*
* Pre-PG 10 allowed tables with 'unknown' type columns and non WAL logged
* hash indexes
diff --git a/src/bin/pg_upgrade/pg_upgrade.h b/src/bin/pg_upgrade/pg_upgrade.h
index 5d31750d86..63574b51bc 100644
--- a/src/bin/pg_upgrade/pg_upgrade.h
+++ b/src/bin/pg_upgrade/pg_upgrade.h
@@ -458,6 +458,8 @@ void old_9_6_check_for_unknown_data_type_usage(ClusterInfo *cluster);
void old_9_6_invalidate_hash_indexes(ClusterInfo *cluster,
bool check_mode);
+void old_11_check_for_sql_identifier_data_type_usage(ClusterInfo *cluster);
+
/* parallel.c */
void parallel_exec_prog(const char *log_file, const char *opt_log_file,
const char *fmt,...) pg_attribute_printf(3, 4);
diff --git a/src/bin/pg_upgrade/version.c b/src/bin/pg_upgrade/version.c
index 10cb362e09..8d766d3d3a 100644
--- a/src/bin/pg_upgrade/version.c
+++ b/src/bin/pg_upgrade/version.c
@@ -399,3 +399,101 @@ old_9_6_invalidate_hash_indexes(ClusterInfo *cluster, bool check_mode)
else
check_ok();
}
+
+/*
+ * old_11_check_for_sql_identifier_data_type_usage()
+ * 11 -> 12
+ * In 12, the sql_identifier data type was switched from varchar to name,
+ * which does affect the storage (name is by-ref, but not varlena). This
+ * means user tables using sql_identifier for columns are broken because
+ * the on-disk format is different.
+ *
+ * We need to check all objects that might store sql_identifier on disk,
+ * i.e. tables, matviews and indexes. Also check composite types in case
+ * they are used in this context.
+ */
+void
+old_11_check_for_sql_identifier_data_type_usage(ClusterInfo *cluster)
+{
+ int dbnum;
+ FILE *script = NULL;
+ bool found = false;
+ char output_path[MAXPGPATH];
+
+ prep_status("Checking for invalid \"sql_identifier\" user columns");
+
+ snprintf(output_path, sizeof(output_path), "tables_using_sql_identifier.txt");
+
+ for (dbnum = 0; dbnum < cluster->dbarr.ndbs; dbnum++)
+ {
+ PGresult *res;
+ bool db_used = false;
+ int ntups;
+ int rowno;
+ int i_nspname,
+ i_relname,
+ i_attname;
+ DbInfo *active_db = &cluster->dbarr.dbs[dbnum];
+ PGconn *conn = connectToServer(cluster, active_db->db_name);
+
+ res = executeQueryOrDie(conn,
+ "SELECT n.nspname, c.relname, a.attname "
+ "FROM pg_catalog.pg_class c, "
+ " pg_catalog.pg_namespace n, "
+ " pg_catalog.pg_attribute a "
+ "WHERE c.oid = a.attrelid AND "
+ " NOT a.attisdropped AND "
+ " a.atttypid = 'information_schema.sql_identifier'::pg_catalog.regtype AND "
+ " c.relkind IN ("
+ CppAsString2(RELKIND_RELATION) ", "
+ CppAsString2(RELKIND_COMPOSITE_TYPE) ", "
+ CppAsString2(RELKIND_MATVIEW) ", "
+ CppAsString2(RELKIND_INDEX) ") AND "
+ " c.relnamespace = n.oid AND "
+ /* exclude possible orphaned temp tables */
+ " n.nspname !~ '^pg_temp_' AND "
+ " n.nspname !~ '^pg_toast_temp_' AND "
+ " n.nspname NOT IN ('pg_catalog', 'information_schema')");
+
+ ntups = PQntuples(res);
+ i_nspname = PQfnumber(res, "nspname");
+ i_relname = PQfnumber(res, "relname");
+ i_attname = PQfnumber(res, "attname");
+ for (rowno = 0; rowno < ntups; rowno++)
+ {
+ found = true;
+ if (script == NULL && (script = fopen_priv(output_path, "w")) == NULL)
+ pg_fatal("could not open file \"%s\": %s\n", output_path,
+ strerror(errno));
+ if (!db_used)
+ {
+ fprintf(script, "Database: %s\n", active_db->db_name);
+ db_used = true;
+ }
+ fprintf(script, " %s.%s.%s\n",
+ PQgetvalue(res, rowno, i_nspname),
+ PQgetvalue(res, rowno, i_relname),
+ PQgetvalue(res, rowno, i_attname));
+ }
+
+ PQclear(res);
+
+ PQfinish(conn);
+ }
+
+ if (script)
+ fclose(script);
+
+ if (found)
+ {
+ pg_log(PG_REPORT, "fatal\n");
+ pg_fatal("Your installation contains the \"sql_identifier\" data type in user tables\n"
+ "and/or indexes. The on-disk format for this data type has changed, so this\n"
+ "cluster cannot currently be upgraded. You can remove the problem tables or\n"
+ "change the data type to \"name\" and restart the upgrade.\n"
+ "A list of the problem columns is in the file:\n"
+ " %s\n\n", output_path);
+ }
+ else
+ check_ok();
+}
Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:
Fair enough, attached is a patch doing that, I think. Maybe the file
should be named differently, as it contains other objects than just
tables.
Seems about right, though I notice it will not detect domains over
sql_identifier. How much do we care about that?
To identify such domains, I think we'd need something like
WHERE attypid IN (recursive-WITH-query), which makes me nervous.
We did support those starting with 8.4, which is as far back as
pg_upgrade will go, so in theory it should work. But I think we
had bugs with such cases in old releases. Do we want to assume
that the source server has been updated enough to avoid any such
bugs? The expense of such a query might be daunting, too.
regards, tom lane
On Thu, Oct 10, 2019 at 10:19:12AM -0400, Tom Lane wrote:
Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:
Fair enough, attached is a patch doing that, I think. Maybe the file
should be named differently, as it contains other objects than just
tables.
Seems about right, though I notice it will not detect domains over
sql_identifier. How much do we care about that?
To identify such domains, I think we'd need something like
WHERE attypid IN (recursive-WITH-query), which makes me nervous.
We did support those starting with 8.4, which is as far back as
pg_upgrade will go, so in theory it should work. But I think we
had bugs with such cases in old releases. Do we want to assume
that the source server has been updated enough to avoid any such
bugs? The expense of such a query might be daunting, too.
Not sure.
Regarding bugs, I think it's fine to assume the users are running
sufficiently recent version - they may not, but then they're probably
subject to various other bugs (data corruption, queries). If they're
not, then they'll either get false positives (in which case they'll be
forced to update) or false negatives (which is just as if we did
nothing).
For the query cost, I think we can assume the domain hierarchies are not
particularly deep (in practice I'd expect just domains directly on the
sql_identifier type). And I doubt people are using that very widely,
it's probably more like this report - ad-hoc CTAS, so just a couple of
items. So I wouldn't expect it to be a huge deal in most cases. But even
if it takes a second or two, it's a one-time cost.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:
On Thu, Oct 10, 2019 at 10:19:12AM -0400, Tom Lane wrote:
To identify such domains, I think we'd need something like
WHERE attypid IN (recursive-WITH-query), which makes me nervous.
We did support those starting with 8.4, which is as far back as
pg_upgrade will go, so in theory it should work. But I think we
had bugs with such cases in old releases. Do we want to assume
that the source server has been updated enough to avoid any such
bugs? The expense of such a query might be daunting, too.
For the query cost, I think we can assume the domain hierarchies are not
particularly deep (in practice I'd expect just domains directly on the
sql_identifier type). And I doubt people are using that very widely,
it's probably more like this report - ad-hoc CTAS, so just a couple of
items. So I wouldn't expect it to be a huge deal in most cases. But even
if it takes a second or two, it's a one-time cost.
What I was worried about was the planner possibly trying to apply the
atttypid restriction as a scan qual using a subplan, which might be rather
awful. But it doesn't look like that happens. I get a hash semijoin to
the CTE output, in all branches back to 8.4, on this trial query:
explain
with recursive sqlidoids(toid) as (
select 'information_schema.sql_identifier'::pg_catalog.regtype as toid
union
select oid from pg_catalog.pg_type, sqlidoids
where typtype = 'd' and typbasetype = sqlidoids.toid
)
SELECT n.nspname, c.relname, a.attname
FROM pg_catalog.pg_class c,
pg_catalog.pg_namespace n,
pg_catalog.pg_attribute a
WHERE c.oid = a.attrelid AND
NOT a.attisdropped AND
a.atttypid in (select toid from sqlidoids) AND
c.relkind IN ('r','v','i') and
c.relnamespace = n.oid AND
n.nspname !~ '^pg_temp_' AND
n.nspname !~ '^pg_toast_temp_' AND
n.nspname NOT IN ('pg_catalog', 'information_schema');
regards, tom lane
On Thu, Oct 10, 2019 at 04:14:20PM -0400, Tom Lane wrote:
Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:
On Thu, Oct 10, 2019 at 10:19:12AM -0400, Tom Lane wrote:
To identify such domains, I think we'd need something like
WHERE attypid IN (recursive-WITH-query), which makes me nervous.
We did support those starting with 8.4, which is as far back as
pg_upgrade will go, so in theory it should work. But I think we
had bugs with such cases in old releases. Do we want to assume
that the source server has been updated enough to avoid any such
bugs? The expense of such a query might be daunting, too.
For the query cost, I think we can assume the domain hierarchies are not
particularly deep (in practice I'd expect just domains directly on the
sql_identifier type). And I doubt people are using that very widely,
it's probably more like this report - ad-hoc CTAS, so just a couple of
items. So I wouldn't expect it to be a huge deal in most cases. But even
if it takes a second or two, it's a one-time cost.
What I was worried about was the planner possibly trying to apply the
atttypid restriction as a scan qual using a subplan, which might be rather
awful. But it doesn't look like that happens.
OK.
I get a hash semijoin to
the CTE output, in all branches back to 8.4, on this trial query:
explain
with recursive sqlidoids(toid) as (
select 'information_schema.sql_identifier'::pg_catalog.regtype as toid
union
select oid from pg_catalog.pg_type, sqlidoids
where typtype = 'd' and typbasetype = sqlidoids.toid
)
SELECT n.nspname, c.relname, a.attname
FROM pg_catalog.pg_class c,
pg_catalog.pg_namespace n,
pg_catalog.pg_attribute a
WHERE c.oid = a.attrelid AND
NOT a.attisdropped AND
a.atttypid in (select toid from sqlidoids) AND
c.relkind IN ('r','v','i') and
c.relnamespace = n.oid AND
n.nspname !~ '^pg_temp_' AND
n.nspname !~ '^pg_toast_temp_' AND
n.nspname NOT IN ('pg_catalog', 'information_schema');
I think that's not quite sufficient - the problem is that we can have
domains and composite types on sql_identifier, in some arbitrary order.
And the recursive CTE won't handle that the way it's written - it will
miss domains on composite types containing sql_identifier. And we have
quite a few of them in the information schema, so maybe someone created
a domain on one of those (however unlikely it may seem).
I think this recursive CTE does it correctly:
WITH RECURSIVE oids AS (
-- type itself
SELECT 'information_schema.sql_identifier'::regtype AS oid
UNION ALL
SELECT * FROM (
-- domains on the type
WITH x AS (SELECT oid FROM oids)
SELECT t.oid FROM pg_catalog.pg_type t, x WHERE typbasetype = x.oid AND typtype = 'd'
UNION
-- composite types containing the type
SELECT t.oid FROM pg_catalog.pg_type t, pg_catalog.pg_class c, pg_catalog.pg_attribute a, x
WHERE t.typtype = 'c' AND
t.oid = c.reltype AND
c.oid = a.attrelid AND
a.atttypid = x.oid
) foo
)
I had to use CTE within CTE, because the 'oids' can be referenced only
once, but we have two subqueries there. Maybe there's a better solution.
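The fixed point this CTE computes can be sketched in ordinary code: start from one type OID and keep adding domains over, and composite types containing, anything already in the set until nothing changes. A minimal illustration, with hypothetical dicts standing in for pg_type and pg_attribute:

```python
# Sketch of the type-closure the recursive CTE computes. Inputs are
# hypothetical stand-ins for the catalogs:
#   domains:    {domain_oid: base_type_oid}          (pg_type, typtype = 'd')
#   composites: {composite_oid: [attribute_type_oids]}  (pg_type 'c' + pg_attribute)
def type_closure(start, domains, composites):
    oids = {start}
    while True:
        # domains whose base type is already in the set
        new = {d for d, base in domains.items() if base in oids}
        # composite types with an attribute whose type is in the set
        new |= {c for c, atts in composites.items()
                if any(a in oids for a in atts)}
        if new <= oids:          # fixed point reached
            return oids
        oids |= new

# sql_identifier (oid 1) wrapped in a domain (2), that domain used in a
# composite (3), and a further domain over the composite (4) -- exactly
# the "arbitrary order" nesting the CTE has to handle:
assert type_closure(1, {2: 1, 4: 3, 9: 8}, {3: [2, 7]}) == {1, 2, 3, 4}
```

An unrelated domain (9 over 8) stays out of the result, which is the behavior the upgrade check needs: only columns whose type ultimately reaches sql_identifier are reported.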
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
OK,
here is an updated patch, with the recursive CTE. I've done a fair
amount of testing on it on older versions (up to 9.4), and it seems to
work just fine.
Another thing that I noticed is that the query does not need to look at
RELKIND_COMPOSITE_TYPE, because we only really care about cases when
sql_identifier is stored on-disk. Composite type alone does not do that,
and the CTE includes OIDs of composite types that we then check against
relations and matviews.
Barring objections, I'll push this early next week.
BTW the query (including the RELKIND_COMPOSITE_TYPE) was essentially just
a lightly-massaged copy of old_9_6_check_for_unknown_data_type_usage, so
that seems wrong too. The comment explicitly says:
* Also check composite types, in case they are used for table columns.
but even a simple "create type c as (a unknown, b int)" without any
table using it is enough to trigger the failure. But maybe it's
intentional, not sure.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Attachments:
pg-upgrade-sql-identifier-fix-v2.patch (text/plain; charset=us-ascii)
diff --git a/src/bin/pg_upgrade/check.c b/src/bin/pg_upgrade/check.c
index 617270f101..c6dc50f3e6 100644
--- a/src/bin/pg_upgrade/check.c
+++ b/src/bin/pg_upgrade/check.c
@@ -108,6 +108,13 @@ check_and_dump_old_cluster(bool live_check)
if (GET_MAJOR_VERSION(old_cluster.major_version) <= 1100)
check_for_tables_with_oids(&old_cluster);
+ /*
+ * PG 12 changed the 'sql_identifier' type storage format, so we need
+ * to prevent upgrade when used in user objects (tables, indexes, ...)
+ */
+ if (GET_MAJOR_VERSION(old_cluster.major_version) <= 1100)
+ old_11_check_for_sql_identifier_data_type_usage(&old_cluster);
+
/*
* Pre-PG 10 allowed tables with 'unknown' type columns and non WAL logged
* hash indexes
diff --git a/src/bin/pg_upgrade/pg_upgrade.h b/src/bin/pg_upgrade/pg_upgrade.h
index 5d31750d86..63574b51bc 100644
--- a/src/bin/pg_upgrade/pg_upgrade.h
+++ b/src/bin/pg_upgrade/pg_upgrade.h
@@ -458,6 +458,8 @@ void old_9_6_check_for_unknown_data_type_usage(ClusterInfo *cluster);
void old_9_6_invalidate_hash_indexes(ClusterInfo *cluster,
bool check_mode);
+void old_11_check_for_sql_identifier_data_type_usage(ClusterInfo *cluster);
+
/* parallel.c */
void parallel_exec_prog(const char *log_file, const char *opt_log_file,
const char *fmt,...) pg_attribute_printf(3, 4);
diff --git a/src/bin/pg_upgrade/version.c b/src/bin/pg_upgrade/version.c
index 10cb362e09..e0b9a9f574 100644
--- a/src/bin/pg_upgrade/version.c
+++ b/src/bin/pg_upgrade/version.c
@@ -399,3 +399,121 @@ old_9_6_invalidate_hash_indexes(ClusterInfo *cluster, bool check_mode)
else
check_ok();
}
+
+/*
+ * old_11_check_for_sql_identifier_data_type_usage()
+ * 11 -> 12
+ * In 12, the sql_identifier data type was switched from varchar to name,
+ * which does affect the storage (name is by-ref, but not varlena). This
+ * means user tables using sql_identifier for columns are broken because
+ * the on-disk format is different.
+ *
+ * We need to check all objects that might store sql_identifier on disk,
+ * i.e. tables, matviews and indexes. Also check composite types in case
+ * they are used in this context.
+ */
+void
+old_11_check_for_sql_identifier_data_type_usage(ClusterInfo *cluster)
+{
+ int dbnum;
+ FILE *script = NULL;
+ bool found = false;
+ char output_path[MAXPGPATH];
+
+ prep_status("Checking for invalid \"sql_identifier\" user columns");
+
+ snprintf(output_path, sizeof(output_path), "tables_using_sql_identifier.txt");
+
+ for (dbnum = 0; dbnum < cluster->dbarr.ndbs; dbnum++)
+ {
+ PGresult *res;
+ bool db_used = false;
+ int ntups;
+ int rowno;
+ int i_nspname,
+ i_relname,
+ i_attname;
+ DbInfo *active_db = &cluster->dbarr.dbs[dbnum];
+ PGconn *conn = connectToServer(cluster, active_db->db_name);
+
+ /*
+ * We need the recursive CTE because the sql_identifier may be wrapped
+ * either in a domain or composite type, or both (in arbitrary order).
+ */
+ res = executeQueryOrDie(conn,
+ "WITH RECURSIVE oids AS ( "
+ /* the sql_identifier type itself */
+ " SELECT 'information_schema.sql_identifier'::regtype AS oid "
+ " UNION ALL "
+ " SELECT * FROM ( "
+ /* domains on the type */
+ " WITH x AS (SELECT oid FROM oids) "
+ " SELECT t.oid FROM pg_catalog.pg_type t, x WHERE typbasetype = x.oid AND typtype = 'd' "
+ " UNION "
+ /* composite types containing the type */
+ " SELECT t.oid FROM pg_catalog.pg_type t, pg_catalog.pg_class c, pg_catalog.pg_attribute a, x "
+ " WHERE t.typtype = 'c' AND "
+ " t.oid = c.reltype AND "
+ " c.oid = a.attrelid AND "
+ " a.atttypid = x.oid "
+ " ) foo "
+ ") "
+ "SELECT n.nspname, c.relname, a.attname "
+ "FROM pg_catalog.pg_class c, "
+ " pg_catalog.pg_namespace n, "
+ " pg_catalog.pg_attribute a "
+ "WHERE c.oid = a.attrelid AND "
+ " NOT a.attisdropped AND "
+ " a.atttypid IN (SELECT oid FROM oids) AND "
+ " c.relkind IN ("
+ CppAsString2(RELKIND_RELATION) ", "
+ CppAsString2(RELKIND_MATVIEW) ", "
+ CppAsString2(RELKIND_INDEX) ") AND "
+ " c.relnamespace = n.oid AND "
+ /* exclude possible orphaned temp tables */
+ " n.nspname !~ '^pg_temp_' AND "
+ " n.nspname !~ '^pg_toast_temp_' AND "
+ " n.nspname NOT IN ('pg_catalog', 'information_schema')");
+
+ ntups = PQntuples(res);
+ i_nspname = PQfnumber(res, "nspname");
+ i_relname = PQfnumber(res, "relname");
+ i_attname = PQfnumber(res, "attname");
+ for (rowno = 0; rowno < ntups; rowno++)
+ {
+ found = true;
+ if (script == NULL && (script = fopen_priv(output_path, "w")) == NULL)
+ pg_fatal("could not open file \"%s\": %s\n", output_path,
+ strerror(errno));
+ if (!db_used)
+ {
+ fprintf(script, "Database: %s\n", active_db->db_name);
+ db_used = true;
+ }
+ fprintf(script, " %s.%s.%s\n",
+ PQgetvalue(res, rowno, i_nspname),
+ PQgetvalue(res, rowno, i_relname),
+ PQgetvalue(res, rowno, i_attname));
+ }
+
+ PQclear(res);
+
+ PQfinish(conn);
+ }
+
+ if (script)
+ fclose(script);
+
+ if (found)
+ {
+ pg_log(PG_REPORT, "fatal\n");
+ pg_fatal("Your installation contains the \"sql_identifier\" data type in user tables\n"
+ "and/or indexes. The on-disk format for this data type has changed, so this\n"
+ "cluster cannot currently be upgraded. You can remove the problem tables or\n"
+ "change the data type to \"name\" and restart the upgrade.\n"
+ "A list of the problem columns is in the file:\n"
+ " %s\n\n", output_path);
+ }
+ else
+ check_ok();
+}
Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:
here is an updated patch, with the recursive CTE. I've done a fair
amount of testing on it on older versions (up to 9.4), and it seems to
work just fine.
Might be a good idea to exclude attisdropped columns in the part of the
recursive query that's looking for sql_identifier columns of composite
types. I'm not sure if composites can have dropped columns today,
but even if they can't it seems like a wise bit of future-proofing.
(We'll no doubt have occasion to use this logic again...)
Looks good other than that nit.
BTW the query (including the RELKIND_COMPOSITE_TYPE) was essentially just
a lightly-massaged copy of old_9_6_check_for_unknown_data_type_usage, so
that seems wrong too.
Yeah, we should back-port this logic into that check too, IMO.
regards, tom lane
On Sun, Oct 13, 2019 at 02:26:48PM -0400, Tom Lane wrote:
Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:
here is an updated patch, with the recursive CTE. I've done a fair
amount of testing on it on older versions (up to 9.4), and it seems to
work just fine.
Might be a good idea to exclude attisdropped columns in the part of the
recursive query that's looking for sql_identifier columns of composite
types. I'm not sure if composites can have dropped columns today,
but even if they can't it seems like a wise bit of future-proofing.
(We'll no doubt have occasion to use this logic again...)
Hmm? How could that be safe? Let's say we have a composite type with a
sql_identifier column, it's used in a table with data, and we drop the
column. We need the pg_type information to parse the existing data, so how
could we skip attisdropped columns?
Looks good other than that nit.
BTW the query (including the RELKIND_COMPSITE_TYPE) was essentially just
a lightly-massaged copy of old_9_6_check_for_unknown_data_type_usage, so
that seems wrong too.
Yeah, we should back-port this logic into that check too, IMO.
You mean the recursive CTE, removal of RELKIND_COMPOSITE_TYPE or the
proposed change w.r.t. dropped columns?
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:
On Sun, Oct 13, 2019 at 02:26:48PM -0400, Tom Lane wrote:
Might be a good idea to exclude attisdropped columns in the part of the
recursive query that's looking for sql_identifier columns of composite
types. I'm not sure if composites can have dropped columns today,
[ I checked this, they can ]
but even if they can't it seems like a wise bit of future-proofing.
(We'll no doubt have occasion to use this logic again...)
Hmm? How could that be safe? Let's say we have a composite type with a
sql_identifier column, it's used in a table with data, and we drop the
column. We need the pg_type information to parse the existing, so how
could we skip attisdropped columns?
It works exactly like it does for table rowtypes.
regression=# create type cfoo as (f1 int, f2 int, f3 int);
CREATE TYPE
regression=# alter type cfoo drop attribute f2;
ALTER TYPE
regression=# select attname,atttypid,attisdropped,attlen,attalign from pg_attribute where attrelid = 'cfoo'::regclass;
attname | atttypid | attisdropped | attlen | attalign
------------------------------+----------+--------------+--------+----------
f1 | 23 | f | 4 | i
........pg.dropped.2........ | 0 | t | 4 | i
f3 | 23 | f | 4 | i
(3 rows)
All we need to skip over the dead data is attlen/attalign, which are
preserved in pg_attribute even if the pg_type row is gone.
As this example shows, you don't really *have* to check attisdropped
because atttypid will be set to zero. But the latter is just a
defense measure in case somebody forgets to check attisdropped;
you're not supposed to forget that.
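The tuple walk this relies on can be illustrated with a toy offset computation. A hedged sketch (fixed-length attributes only, ignoring varlena and null bitmaps): each attribute starts at its predecessor's end rounded up to the attribute's alignment, and a dropped column still contributes its attlen/attalign even though its atttypid is zeroed:

```python
# Toy heap-tuple layout walk: advance past each value using only attlen
# and attalign -- exactly the pg_attribute fields that survive a
# DROP COLUMN. Real tuples also handle varlena values and null bitmaps;
# this only illustrates why dropped columns remain skippable.
def attribute_offsets(atts):
    """atts: list of (attlen, attalign_in_bytes) tuples; returns offsets."""
    offsets, pos = [], 0
    for attlen, align in atts:
        pos = (pos + align - 1) // align * align  # round up to alignment
        offsets.append(pos)
        pos += attlen
    return offsets

# cfoo after "alter type cfoo drop attribute f2": the dropped int4
# column keeps attlen=4, attalign='i' (4 bytes), so f3 still lands at
# offset 8, with no pg_type lookup needed for the dead column.
assert attribute_offsets([(4, 4), (4, 4), (4, 4)]) == [0, 4, 8]
# Alignment padding example: a 1-byte value followed by an int4.
assert attribute_offsets([(1, 1), (4, 4)]) == [0, 4]
```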
regards, tom lane
On Mon, Oct 14, 2019 at 10:16:40AM -0400, Tom Lane wrote:
Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:
On Sun, Oct 13, 2019 at 02:26:48PM -0400, Tom Lane wrote:
Might be a good idea to exclude attisdropped columns in the part of the
recursive query that's looking for sql_identifier columns of composite
types. I'm not sure if composites can have dropped columns today,
[ I checked this, they can ]
but even if they can't it seems like a wise bit of future-proofing.
(We'll no doubt have occasion to use this logic again...)Hmm? How could that be safe? Let's say we have a composite type with a
sql_identifier column, it's used in a table with data, and we drop the
column. We need the pg_type information to parse the existing, so how
could we skip attisdropped columns?It works exactly like it does for table rowtypes.
regression=# create type cfoo as (f1 int, f2 int, f3 int);
CREATE TYPE
regression=# alter type cfoo drop attribute f2;
ALTER TYPE
regression=# select attname,atttypid,attisdropped,attlen,attalign from pg_attribute where attrelid = 'cfoo'::regclass;
attname | atttypid | attisdropped | attlen | attalign
------------------------------+----------+--------------+--------+----------
f1 | 23 | f | 4 | i
........pg.dropped.2........ | 0 | t | 4 | i
f3 | 23 | f | 4 | i
(3 rows)All we need to skip over the dead data is attlen/attalign, which are
preserved in pg_attribute even if the pg_type row is gone.As this example shows, you don't really *have* to check attisdropped
because atttypid will be set to zero. But the latter is just a
defense measure in case somebody forgets to check attisdropped;
you're not supposed to forget that.
Aha! I forgot we copy the necessary stuff into pg_attribute. Thanks for
clarifying, I'll polish and push the fix shortly.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
On Mon, Oct 14, 2019 at 06:35:38PM +0200, Tomas Vondra wrote:
...
Aha! I forgot we copy the necessary stuff into pg_attribute. Thanks for
clarifying, I'll polish and push the fix shortly.
I've pushed and backpatched the fix. Attached are similar fixes for the
existing pg_upgrade checks for pg_catalog.line and pg_catalog.unknown
types, which have the same issues with composite types and domains.
There are some additional details & examples in the commit messages.
I've kept this in two patches primarily because of backpatching - the
line fix should go back up to 9.4, the unknown is for 10.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Attachments:
0001-Correct-the-check-for-pg_catalog.line-in-pg_upgrade.patch (text/plain; charset=us-ascii)
From 52821ea02fca502ff070508ee7be278563117509 Mon Sep 17 00:00:00 2001
From: Tomas Vondra <tv@fuzzy.cz>
Date: Tue, 15 Oct 2019 01:16:21 +0200
Subject: [PATCH 1/2] Correct the check for pg_catalog.line in pg_upgrade
The pg_upgrade check for pg_catalog.line data type when upgrading from
9.3 had a number of issues with domains and composite types. Firstly, it
detected even composite types that were not used in any object with storage.
So for example this was enough to trigger a pg_upgrade failure:
CREATE TYPE line_composite AS (l pg_catalog.line)
On the other hand, this only happened with composite types directly on
the pg_catalog.line data type, but not with a domain. So this was not
detected
CREATE DOMAIN line_domain AS pg_catalog.line;
CREATE TYPE line_composite_2 AS (l line_domain);
unlike the first example. What's worse, we have not detected this even
when used in a table. So we missed cases like this:
CREATE TABLE t (l line_composite_2);
This fixes these false positives and false negatives by adopting the same
recursive CTE introduced by eaf900e842 for sql_identifier. Backpatch all
the way to 9.4, where the storage for pg_catalog.line data type changed.
Author: Tomas Vondra
Backpatch-to: 9.4-
Discussion: https://postgr.es/m/16045-673e8fa6b5ace196%40postgresql.org
---
src/bin/pg_upgrade/version.c | 29 ++++++++++++++++++++++++++++-
1 file changed, 28 insertions(+), 1 deletion(-)
diff --git a/src/bin/pg_upgrade/version.c b/src/bin/pg_upgrade/version.c
index 8375a46454..cfe69ea554 100644
--- a/src/bin/pg_upgrade/version.c
+++ b/src/bin/pg_upgrade/version.c
@@ -131,14 +131,41 @@ old_9_3_check_for_line_data_type_usage(ClusterInfo *cluster)
DbInfo *active_db = &cluster->dbarr.dbs[dbnum];
PGconn *conn = connectToServer(cluster, active_db->db_name);
+ /*
+ * We need the recursive CTE because the pg_catalog.line may be wrapped
+ * either in a domain or composite type, or both (9.3 did not allow domains
+ * on composite types, but there may be multi-level composite types).
+ */
res = executeQueryOrDie(conn,
+ "WITH RECURSIVE oids AS ( "
+ /* the pg_catalog.line type itself */
+ " SELECT 'pg_catalog.line'::pg_catalog.regtype AS oid "
+ " UNION ALL "
+ " SELECT * FROM ( "
+ /* domains on the type */
+ " WITH x AS (SELECT oid FROM oids) "
+ " SELECT t.oid FROM pg_catalog.pg_type t, x WHERE typbasetype = x.oid AND typtype = 'd' "
+ " UNION "
+ /* composite types containing the type */
+ " SELECT t.oid FROM pg_catalog.pg_type t, pg_catalog.pg_class c, pg_catalog.pg_attribute a, x "
+ " WHERE t.typtype = 'c' AND "
+ " t.oid = c.reltype AND "
+ " c.oid = a.attrelid AND "
+ " NOT a.attisdropped AND "
+ " a.atttypid = x.oid "
+ " ) foo "
+ ") "
"SELECT n.nspname, c.relname, a.attname "
"FROM pg_catalog.pg_class c, "
" pg_catalog.pg_namespace n, "
" pg_catalog.pg_attribute a "
"WHERE c.oid = a.attrelid AND "
" NOT a.attisdropped AND "
- " a.atttypid = 'pg_catalog.line'::pg_catalog.regtype AND "
+ " a.atttypid IN (SELECT oid FROM oids) AND "
+ " c.relkind IN ("
+ CppAsString2(RELKIND_RELATION) ", "
+ CppAsString2(RELKIND_MATVIEW) ", "
+ CppAsString2(RELKIND_INDEX) ") AND "
" c.relnamespace = n.oid AND "
/* exclude possible orphaned temp tables */
" n.nspname !~ '^pg_temp_' AND "
--
2.21.0
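The recursive CTE in the patch above computes a transitive closure: starting from pg_catalog.line, it keeps adding domains over, and composite types containing, any type already found, until nothing new turns up. A toy Python model of that fixed-point iteration follows; the type names mirror the commit message's examples, and the two dicts are made-up stand-ins for pg_type/pg_attribute, not real catalog data.

```python
# Toy model of the recursive CTE: which types (transitively) wrap the
# target type via domains or composite-type columns?
domains = {"line_domain": "pg_catalog.line"}          # domain -> base type
composites = {                                        # composite -> column types
    "line_composite":   ["pg_catalog.line"],
    "line_composite_2": ["line_domain"],
    "unrelated":        ["int4"],
}

def affected_types(target):
    found = {target}
    while True:
        # domains whose base type is already in the set
        new = {d for d, base in domains.items() if base in found}
        # composite types with a column of a type already in the set
        new |= {c for c, cols in composites.items()
                if any(t in found for t in cols)}
        if new <= found:          # fixed point reached
            return found
        found |= new

print(sorted(affected_types("pg_catalog.line")))
# ['line_composite', 'line_composite_2', 'line_domain', 'pg_catalog.line']
```

Note how line_composite_2 is only reached on the second iteration, through line_domain — exactly the domain-inside-composite case the original check missed.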
0002-Correct-the-check-for-pg_catalog.unknown-in-pg_upgra.patch (text/plain; charset=us-ascii)
From de98ca82812798e6b7392196fdcf34dff8a78133 Mon Sep 17 00:00:00 2001
From: Tomas Vondra <tv@fuzzy.cz>
Date: Tue, 15 Oct 2019 01:16:47 +0200
Subject: [PATCH 2/2] Correct the check for pg_catalog.unknown in pg_upgrade
The pg_upgrade check for pg_catalog.unknown type when upgrading from 9.6
had a couple of issues with domains and composite types. Firstly, it
detected even composite types that were not used in any object with storage.
So for example this was enough to trigger a pg_upgrade failure:
CREATE TYPE unknown_composite AS (u pg_catalog.unknown)
On the other hand, this only happened with composite types directly on
the pg_catalog.unknown data type, but not with a domain. So this was not
detected
CREATE DOMAIN unknown_domain AS pg_catalog.unknown;
CREATE TYPE unknown_composite_2 AS (u unknown_domain);
unlike the first example. What's worse, we have not detected this even
when used in a table. So we missed cases like this:
CREATE TABLE t (u unknown_composite_2);
This fixes these false positives and false negatives by using the same
recursive CTE introduced by eaf900e842 for sql_identifier. Backpatch all
the way to 10, where use of the pg_catalog.unknown data type was restricted.
Author: Tomas Vondra
Backpatch-to: 10-
Discussion: https://postgr.es/m/16045-673e8fa6b5ace196%40postgresql.org
---
src/bin/pg_upgrade/version.c | 21 +++++++++++++++++++--
1 file changed, 19 insertions(+), 2 deletions(-)
diff --git a/src/bin/pg_upgrade/version.c b/src/bin/pg_upgrade/version.c
index cfe69ea554..ccfb83c7eb 100644
--- a/src/bin/pg_upgrade/version.c
+++ b/src/bin/pg_upgrade/version.c
@@ -256,16 +256,33 @@ old_9_6_check_for_unknown_data_type_usage(ClusterInfo *cluster)
PGconn *conn = connectToServer(cluster, active_db->db_name);
res = executeQueryOrDie(conn,
+ "WITH RECURSIVE oids AS ( "
+ /* the pg_catalog.unknown type itself */
+ " SELECT 'pg_catalog.unknown'::pg_catalog.regtype AS oid "
+ " UNION ALL "
+ " SELECT * FROM ( "
+ /* domains on the type */
+ " WITH x AS (SELECT oid FROM oids) "
+ " SELECT t.oid FROM pg_catalog.pg_type t, x WHERE typbasetype = x.oid AND typtype = 'd' "
+ " UNION "
+ /* composite types containing the type */
+ " SELECT t.oid FROM pg_catalog.pg_type t, pg_catalog.pg_class c, pg_catalog.pg_attribute a, x "
+ " WHERE t.typtype = 'c' AND "
+ " t.oid = c.reltype AND "
+ " c.oid = a.attrelid AND "
+ " NOT a.attisdropped AND "
+ " a.atttypid = x.oid "
+ " ) foo "
+ ") "
"SELECT n.nspname, c.relname, a.attname "
"FROM pg_catalog.pg_class c, "
" pg_catalog.pg_namespace n, "
" pg_catalog.pg_attribute a "
"WHERE c.oid = a.attrelid AND "
" NOT a.attisdropped AND "
- " a.atttypid = 'pg_catalog.unknown'::pg_catalog.regtype AND "
+ " a.atttypid IN (SELECT oid FROM oids) AND "
" c.relkind IN ("
CppAsString2(RELKIND_RELATION) ", "
- CppAsString2(RELKIND_COMPOSITE_TYPE) ", "
CppAsString2(RELKIND_MATVIEW) ") AND "
" c.relnamespace = n.oid AND "
/* exclude possible orphaned temp tables */
--
2.21.0
On Tue, Oct 15, 2019 at 02:18:17AM +0200, Tomas Vondra wrote:
On Mon, Oct 14, 2019 at 06:35:38PM +0200, Tomas Vondra wrote:
...
Aha! I forgot we copy the necessary stuff into pg_attribute. Thanks for
clarifying, I'll polish and push the fix shortly.
Perhaps it'd be worth creating a test for on-disk format ?
Like a table with a column for each core type, which is either SELECTed from
after pg_upgrade, or pg_dump output compared before and after.
Justin
On Mon, Oct 14, 2019 at 11:41:18PM -0500, Justin Pryzby wrote:
On Tue, Oct 15, 2019 at 02:18:17AM +0200, Tomas Vondra wrote:
On Mon, Oct 14, 2019 at 06:35:38PM +0200, Tomas Vondra wrote:
...
Aha! I forgot we copy the necessary stuff into pg_attribute. Thanks for
clarifying, I'll polish and push the fix shortly.
Perhaps it'd be worth creating a test for on-disk format?
Like a table with a column for each core type, which is either SELECTed from
after pg_upgrade, or pg_dump output compared before and after.
IMO that would be useful - we now have a couple of these checks for
different data types (line, unknown, sql_identifier), with a couple of
combinations each. And I've been looking if we do similar pg_upgrade
tests, but I haven't found anything. I mean, we do pg_upgrade the
cluster used for regression tests, but here we need to test a number of
cases that are meant to abort the pg_upgrade. So we'd need a number of
pg_upgrade runs, to test that.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
On Tue, Oct 15, 2019 at 02:18:17AM +0200, Tomas Vondra wrote:
On Mon, Oct 14, 2019 at 06:35:38PM +0200, Tomas Vondra wrote:
...
Aha! I forgot we copy the necessary stuff into pg_attribute. Thanks for
clarifying, I'll polish and push the fix shortly.
I've pushed and backpatched the fix. Attached are similar fixes for the
existing pg_upgrade checks for pg_catalog.line and pg_catalog.unknown
types, which have the same issues with composite types and domains.
There are some additional details & examples in the commit messages.
I've kept this in two patches primarily because of backpatching - the
line fix should go back up to 9.4, the unknown is for 10.
I've just committed and pushed both fixes after some minor corrections.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:
I've just committed and pushed both fixes after some minor corrections.
Not quite right in 9.6 and before, according to crake. Looks like
some issue with the CppAsString2'd constants? Did we even have
CppAsString2 that far back?
regards, tom lane
I wrote:
Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:
I've just committed and pushed both fixes after some minor corrections.
Not quite right in 9.6 and before, according to crake. Looks like
some issue with the CppAsString2'd constants? Did we even have
CppAsString2 that far back?
Yeah, we did. On closer inspection I suspect that we need to #include
some other file to get the RELKIND_ constants in the old branches.
regards, tom lane
On Wed, Oct 16, 2019 at 03:26:42PM +0200, Tom Lane wrote:
I wrote:
Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:
I've just committed and pushed both fixes after some minor corrections.
Not quite right in 9.6 and before, according to crake. Looks like
some issue with the CppAsString2'd constants? Did we even have
CppAsString2 that far back?
Yeah, we did. On closer inspection I suspect that we need to #include
some other file to get the RELKIND_ constants in the old branches.
Oh! Looking.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
On Wed, Oct 16, 2019 at 03:26:42PM +0200, Tom Lane wrote:
I wrote:
Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:
I've just committed and pushed both fixes after some minor corrections.
Not quite right in 9.6 and before, according to crake. Looks like
some issue with the CppAsString2'd constants? Did we even have
CppAsString2 that far back?
Yeah, we did. On closer inspection I suspect that we need to #include
some other file to get the RELKIND_ constants in the old branches.
Yeah, the pg_class.h header was not included on pre-10 releases. It compiled
just fine, so I hadn't noticed that during the backpatching :-(
Fixed, let's see if the buildfarm is happy with that.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
On Tue, Oct 15, 2019 at 02:18:17AM +0200, Tomas Vondra wrote:
On Mon, Oct 14, 2019 at 06:35:38PM +0200, Tomas Vondra wrote:
...
Aha! I forgot we copy the necessary stuff into pg_attribute. Thanks for
clarifying, I'll polish and push the fix shortly.
I've pushed and backpatched the fix. Attached are similar fixes for the
existing pg_upgrade checks for pg_catalog.line and pg_catalog.unknown
types, which have the same issues with composite types and domains.
This commit added old_11_check_for_sql_identifier_data_type_usage(), but
it did not use the clearer database error list format added to the
master branch in commit 1634d36157. Attached is a patch to fix this,
which I have committed.
--
Bruce Momjian <bruce@momjian.us> http://momjian.us
EnterpriseDB http://enterprisedb.com
+ As you are, so once was I. As I am, so you will be. +
+ Ancient Roman grave inscription +
Attachments:
infoschema.diff (text/x-diff; charset=us-ascii)
diff --git a/src/bin/pg_upgrade/version.c b/src/bin/pg_upgrade/version.c
new file mode 100644
index b64171d..3f7c8c5
*** a/src/bin/pg_upgrade/version.c
--- b/src/bin/pg_upgrade/version.c
*************** old_11_check_for_sql_identifier_data_typ
*** 540,546 ****
strerror(errno));
if (!db_used)
{
! fprintf(script, "Database: %s\n", active_db->db_name);
db_used = true;
}
fprintf(script, " %s.%s.%s\n",
--- 540,546 ----
strerror(errno));
if (!db_used)
{
! fprintf(script, "In database: %s\n", active_db->db_name);
db_used = true;
}
fprintf(script, " %s.%s.%s\n",
I'm finally returning to this 14-month-old thread:
(was: Re: BUG #16045: vacuum_db crash and illegal memory alloc after pg_upgrade from PG11 to PG12)
On Tue, Oct 15, 2019 at 09:07:25AM +0200, Tomas Vondra wrote:
On Mon, Oct 14, 2019 at 11:41:18PM -0500, Justin Pryzby wrote:
Perhaps it'd be worth creating a test for on-disk format ?
Like a table with a column for each core type, which is either SELECTed from
after pg_upgrade, or pg_dump output compared before and after.
IMO that would be useful - we now have a couple of these checks for
different data types (line, unknown, sql_identifier), with a couple of
combinations each. And I've been looking if we do similar pg_upgrade
tests, but I haven't found anything. I mean, we do pg_upgrade the
cluster used for regression tests, but here we need to test a number of
cases that are meant to abort the pg_upgrade. So we'd need a number of
pg_upgrade runs, to test that.
I meant for the test to notice if the binary format is accidentally changed
again, which is what happened here:
7c15cef86 Base information_schema.sql_identifier domain on name, not varchar.
I added a table to the regression tests so it's processed by pg_upgrade tests,
run like:
| time make -C src/bin/pg_upgrade check oldsrc=`pwd`/11 oldbindir=`pwd`/11/tmp_install/usr/local/pgsql/bin
I checked that if I cherry-pick 0002 to v11, and comment out
old_11_check_for_sql_identifier_data_type_usage(), then pg_upgrade/test.sh
detects the original problem:
pg_dump: error: Error message from server: ERROR: invalid memory alloc request size 18446744073709551613
I understand the buildfarm has its own cross-version-upgrade test, which I
think would catch this on its own.
The commits below all complicate use of pg_upgrade/test.sh, so 0001 is needed
to allow testing upgrades from older releases:
e78900afd217fa3eaa77c51e23a94c1466af421c Create by default sql/ and expected/ for output directory in pg_regress
40b132c1afbb4b1494aa8e48cc35ec98d2b90777 In the pg_upgrade test suite, don't write to src/test/regress.
fc49e24fa69a15efacd5b8958115ed9c43c48f9a Make WAL segment size configurable at initdb time.
c37b3d08ca6873f9d4eaf24c72a90a550970cbb8 Allow group access on PGDATA
da9b580d89903fee871cf54845ffa2b26bda2e11 Refactor dir/file permissions
--
Justin
Attachments:
0001-WIP-pg_upgrade-test.sh-changes-needed-to-allow-testi.patch (text/x-diff; charset=us-ascii)
From e9d70f3d043211011f7c7774ed2ed5eaee3760dc Mon Sep 17 00:00:00 2001
From: Justin Pryzby <pryzbyj@telsasoft.com>
Date: Sat, 5 Dec 2020 22:31:19 -0600
Subject: [PATCH 1/2] WIP: pg_upgrade/test.sh: changes needed to allow testing
upgrade from v11
---
src/bin/pg_upgrade/check.c | 2 +-
src/bin/pg_upgrade/test.sh | 44 ++++++++++++++++++++++++++++++++------
2 files changed, 39 insertions(+), 7 deletions(-)
diff --git a/src/bin/pg_upgrade/check.c b/src/bin/pg_upgrade/check.c
index 357997972b..6dfe3cff65 100644
--- a/src/bin/pg_upgrade/check.c
+++ b/src/bin/pg_upgrade/check.c
@@ -122,7 +122,7 @@ check_and_dump_old_cluster(bool live_check)
* to prevent upgrade when used in user objects (tables, indexes, ...).
*/
if (GET_MAJOR_VERSION(old_cluster.major_version) <= 1100)
- old_11_check_for_sql_identifier_data_type_usage(&old_cluster);
+ ; // old_11_check_for_sql_identifier_data_type_usage(&old_cluster);
/*
* Pre-PG 10 allowed tables with 'unknown' type columns and non WAL logged
diff --git a/src/bin/pg_upgrade/test.sh b/src/bin/pg_upgrade/test.sh
index 04aa7fd9f5..b39265f66d 100644
--- a/src/bin/pg_upgrade/test.sh
+++ b/src/bin/pg_upgrade/test.sh
@@ -23,7 +23,7 @@ standard_initdb() {
# To increase coverage of non-standard segment size and group access
# without increasing test runtime, run these tests with a custom setting.
# Also, specify "-A trust" explicitly to suppress initdb's warning.
- "$1" -N --wal-segsize 1 -g -A trust
+ "$1" -N -A trust
if [ -n "$TEMP_CONFIG" -a -r "$TEMP_CONFIG" ]
then
cat "$TEMP_CONFIG" >> "$PGDATA/postgresql.conf"
@@ -108,6 +108,9 @@ export EXTRA_REGRESS_OPTS
mkdir "$outputdir"
mkdir "$outputdir"/testtablespace
+mkdir "$outputdir"/sql
+mkdir "$outputdir"/expected
+
logdir=`pwd`/log
rm -rf "$logdir"
mkdir "$logdir"
@@ -175,13 +178,36 @@ if "$MAKE" -C "$oldsrc" installcheck-parallel; then
fix_sql="DROP FUNCTION public.myfunc(integer); DROP FUNCTION public.oldstyle_length(integer, text);"
;;
*)
- fix_sql="DROP FUNCTION public.oldstyle_length(integer, text);"
+ fix_sql="DROP FUNCTION IF EXISTS public.oldstyle_length(integer, text);"
+
+ # commit 1ed6b8956
+ fix_sql="$fix_sql DROP OPERATOR public.#@# (pg_catalog.int8, NONE);"
+ fix_sql="$fix_sql DROP OPERATOR public.#%# (pg_catalog.int8, NONE);"
+ fix_sql="$fix_sql DROP OPERATOR public.!=- (pg_catalog.int8, NONE);"
+ fix_sql="$fix_sql DROP OPERATOR public.#@%# (pg_catalog.int8, NONE);"
+
+ # commit 76f412ab3
+ fix_sql="$fix_sql DROP OPERATOR IF EXISTS @#@(bigint,NONE);"
+ fix_sql="$fix_sql DROP OPERATOR IF EXISTS @#@(NONE,bigint);"
+
+ # commit 9e38c2bb5 and 97f73a978
+ fix_sql="$fix_sql DROP AGGREGATE IF EXISTS array_larger_accum (anyarray);"
+ fix_sql="$fix_sql DROP AGGREGATE IF EXISTS array_cat_accum(anyarray);"
+ fix_sql="$fix_sql DROP AGGREGATE IF EXISTS first_el_agg_any(anyelement);"
+
+ # commit 578b22971
+ fix_sql="$fix_sql ALTER TABLE public.tenk1 SET WITHOUT OIDS;"
+ fix_sql="$fix_sql ALTER TABLE public.tenk1 SET WITHOUT OIDS;"
+ #fix_sql="$fix_sql ALTER TABLE public.stud_emp SET WITHOUT OIDS;" # inherited
+ fix_sql="$fix_sql ALTER TABLE public.emp SET WITHOUT OIDS;"
+ fix_sql="$fix_sql ALTER TABLE public.tt7 SET WITHOUT OIDS;"
;;
esac
+
psql -X -d regression -c "$fix_sql;" || psql_fix_sql_status=$?
fi
- pg_dumpall --no-sync -f "$temp_root"/dump1.sql || pg_dumpall1_status=$?
+ pg_dumpall --extra-float-digits=0 --no-sync -f "$temp_root"/dump1.sql || pg_dumpall1_status=$?
if [ "$newsrc" != "$oldsrc" ]; then
# update references to old source tree's regress.so etc
@@ -227,23 +253,29 @@ pg_upgrade $PG_UPGRADE_OPTS -d "${PGDATA}.old" -D "$PGDATA" -b "$oldbindir" -p "
# Windows hosts don't support Unix-y permissions.
case $testhost in
MINGW*) ;;
- *) if [ `find "$PGDATA" -type f ! -perm 640 | wc -l` -ne 0 ]; then
+ *)
+ x=`find "$PGDATA" -type f -perm /127 -ls`
+ if [ -n "$x" ]; then
echo "files in PGDATA with permission != 640";
+ echo "$x" |head
exit 1;
fi ;;
esac
case $testhost in
MINGW*) ;;
- *) if [ `find "$PGDATA" -type d ! -perm 750 | wc -l` -ne 0 ]; then
+ *)
+ x=`find "$PGDATA" -type d -perm 027 -ls`
+ if [ "$x" ]; then
echo "directories in PGDATA with permission != 750";
+ echo "$x" |head
exit 1;
fi ;;
esac
pg_ctl start -l "$logdir/postmaster2.log" -o "$POSTMASTER_OPTS" -w
-pg_dumpall --no-sync -f "$temp_root"/dump2.sql || pg_dumpall2_status=$?
+pg_dumpall --extra-float-digits=0 --no-sync -f "$temp_root"/dump2.sql || pg_dumpall2_status=$?
pg_ctl -m fast stop
if [ -n "$pg_dumpall2_status" ]; then
--
2.17.0
0002-pg_upgrade-test-to-exercise-binary-compatibility.patch (text/x-diff; charset=us-ascii)
From 7cdcf46561948a2011a88945bac7d05cb1f13baa Mon Sep 17 00:00:00 2001
From: Justin Pryzby <pryzbyj@telsasoft.com>
Date: Sat, 5 Dec 2020 17:20:09 -0600
Subject: [PATCH 2/2] pg_upgrade: test to exercise binary compatibility
Creating a table with columns of many different datatypes.
---
src/test/regress/expected/sanity_check.out | 1 +
src/test/regress/expected/type_sanity.out | 42 ++++++++++++++++++++++
src/test/regress/sql/type_sanity.sql | 42 ++++++++++++++++++++++
3 files changed, 85 insertions(+)
diff --git a/src/test/regress/expected/sanity_check.out b/src/test/regress/expected/sanity_check.out
index 192445878d..aa0a4fd9be 100644
--- a/src/test/regress/expected/sanity_check.out
+++ b/src/test/regress/expected/sanity_check.out
@@ -69,6 +69,7 @@ line_tbl|f
log_table|f
lseg_tbl|f
main_table|f
+manytypes|f
mlparted|f
mlparted1|f
mlparted11|f
diff --git a/src/test/regress/expected/type_sanity.out b/src/test/regress/expected/type_sanity.out
index 274130e706..2feefc7224 100644
--- a/src/test/regress/expected/type_sanity.out
+++ b/src/test/regress/expected/type_sanity.out
@@ -631,3 +631,45 @@ WHERE pronargs != 2
----------+------------+---------
(0 rows)
+-- Create a table with different data types, to exercise binary compatibility
+-- during pg_upgrade test
+CREATE TABLE manytypes AS SELECT
+'(11,12)'::point, '(1,1),(2,2)'::line,
+'((11,11),(12,12))'::lseg, '((11,11),(13,13))'::box,
+'((11,12),(13,13),(14,14))'::path AS openedpath, '[(11,12),(13,13),(14,14)]'::path AS closedpath,
+'((11,12),(13,13),(14,14))'::polygon, '1,1,1'::circle,
+'today'::date, 'now'::time, 'now'::timestamp, 'now'::timetz, 'now'::timestamptz, '12 seconds'::interval,
+'{"reason":"because"}'::json, '{"when":"now"}'::jsonb, '$.a[*] ? (@ > 2)'::jsonpath,
+'<foo>bar</foo>'::xml,
+'127.0.0.1'::inet, '127.0.0.0/8'::cidr, '00:01:03:86:1c:ba'::macaddr8, '00:01:03:86:1c:ba'::macaddr,
+2::int2, 4::int4, 8::int8, 4::float4, '8'::float8, pi()::numeric,
+'c'::bpchar, 'abc'::varchar, 'name'::name, 'txt'::text, true::bool,
+E'\\xDEADBEEF'::bytea, B'10001'::bit, B'10001'::varbit AS varbit, '12.34'::money,
+'abc'::refcursor,
+'1 2'::int2vector, '1 2'::oidvector, format('%s=UC/%s', USER, USER)::aclitem,
+'a fat cat sat on a mat and ate a fat rat'::tsvector, 'fat & rat'::tsquery,
+'a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a11'::uuid, '11'::xid8,
+-- 'pg_class'::regclass, 'english'::regconfig, 'simple'::regdictionary, 'pg_catalog'::regnamespace,
+-- -- 'POSIX'::regcollation,
+-- -- '+'::regoper,
+-- '*(integer,integer)'::regoperator,
+-- -- 'sum'::regproc,
+-- 'sum(int4)'::regprocedure, USER::regrole, 'regtype'::regtype type,
+1::information_schema.cardinal_number,
+'l'::information_schema.character_data,
+'n'::information_schema.sql_identifier,
+'now'::information_schema.time_stamp,
+'YES'::information_schema.yes_or_no;
+-- And now a test on the previous test, checking that all core types are
+-- included in this table (or some other non-catalog table processed by pg_upgrade).
+SELECT typname, typtype, typelem, typarray, typarray FROM pg_type t
+WHERE typnamespace IN ('pg_catalog'::regnamespace, 'information_schema'::regnamespace)
+AND typtype IN ('b', 'e', 'd')
+AND NOT typname~'_|^char$|^reg'
+AND oid != ALL(ARRAY['gtsvector', 'regcollation', 'regoper', 'regproc']::regtype[])
+AND NOT EXISTS (SELECT * FROM pg_attribute a WHERE a.atttypid=t.oid AND a.attrelid='manytypes'::regclass)
+ORDER BY 1,2,3,4;
+ typname | typtype | typelem | typarray | typarray
+---------+---------+---------+----------+----------
+(0 rows)
+
diff --git a/src/test/regress/sql/type_sanity.sql b/src/test/regress/sql/type_sanity.sql
index 4b492ce062..f2b490a9c6 100644
--- a/src/test/regress/sql/type_sanity.sql
+++ b/src/test/regress/sql/type_sanity.sql
@@ -467,3 +467,45 @@ FROM pg_range p1 JOIN pg_proc p ON p.oid = p1.rngsubdiff
WHERE pronargs != 2
OR proargtypes[0] != rngsubtype OR proargtypes[1] != rngsubtype
OR prorettype != 'pg_catalog.float8'::regtype;
+
+-- Create a table with different data types, to exercise binary compatibility
+-- during pg_upgrade test
+
+CREATE TABLE manytypes AS SELECT
+'(11,12)'::point, '(1,1),(2,2)'::line,
+'((11,11),(12,12))'::lseg, '((11,11),(13,13))'::box,
+'((11,12),(13,13),(14,14))'::path AS openedpath, '[(11,12),(13,13),(14,14)]'::path AS closedpath,
+'((11,12),(13,13),(14,14))'::polygon, '1,1,1'::circle,
+'today'::date, 'now'::time, 'now'::timestamp, 'now'::timetz, 'now'::timestamptz, '12 seconds'::interval,
+'{"reason":"because"}'::json, '{"when":"now"}'::jsonb, '$.a[*] ? (@ > 2)'::jsonpath,
+'<foo>bar</foo>'::xml,
+'127.0.0.1'::inet, '127.0.0.0/8'::cidr, '00:01:03:86:1c:ba'::macaddr8, '00:01:03:86:1c:ba'::macaddr,
+2::int2, 4::int4, 8::int8, 4::float4, '8'::float8, pi()::numeric,
+'c'::bpchar, 'abc'::varchar, 'name'::name, 'txt'::text, true::bool,
+E'\\xDEADBEEF'::bytea, B'10001'::bit, B'10001'::varbit AS varbit, '12.34'::money,
+'abc'::refcursor,
+'1 2'::int2vector, '1 2'::oidvector, format('%s=UC/%s', USER, USER)::aclitem,
+'a fat cat sat on a mat and ate a fat rat'::tsvector, 'fat & rat'::tsquery,
+
+'a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a11'::uuid, '11'::xid8,
+-- 'pg_class'::regclass, 'english'::regconfig, 'simple'::regdictionary, 'pg_catalog'::regnamespace,
+-- -- 'POSIX'::regcollation,
+-- -- '+'::regoper,
+-- '*(integer,integer)'::regoperator,
+-- -- 'sum'::regproc,
+-- 'sum(int4)'::regprocedure, USER::regrole, 'regtype'::regtype type,
+1::information_schema.cardinal_number,
+'l'::information_schema.character_data,
+'n'::information_schema.sql_identifier,
+'now'::information_schema.time_stamp,
+'YES'::information_schema.yes_or_no;
+
+-- And now a test on the previous test, checking that all core types are
+-- included in this table (or some other non-catalog table processed by pg_upgrade).
+SELECT typname, typtype, typelem, typarray, typarray FROM pg_type t
+WHERE typnamespace IN ('pg_catalog'::regnamespace, 'information_schema'::regnamespace)
+AND typtype IN ('b', 'e', 'd')
+AND NOT typname~'_|^char$|^reg'
+AND oid != ALL(ARRAY['gtsvector', 'regcollation', 'regoper', 'regproc']::regtype[])
+AND NOT EXISTS (SELECT * FROM pg_attribute a WHERE a.atttypid=t.oid AND a.attrelid='manytypes'::regclass)
+ORDER BY 1,2,3,4;
--
2.17.0
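The final query in the type_sanity patch above is a coverage check on the test itself: any core type with no column in manytypes shows up in the result, so the expected output is zero rows. A toy Python model (the type names are illustrative, not the full pg_type list, and the real check is a catalog query anti-joining pg_type against manytypes' columns):

```python
# Toy model of the "no core type left untested" check: a set difference
# between core types and the types present as columns of the test table.
core_types = {"point", "line", "lseg", "box", "uuid", "jsonb"}

def uncovered(core, table_columns):
    """Core types with no column of that type in the test table."""
    return sorted(core - table_columns)

# Full coverage corresponds to the "(0 rows)" expected by the test:
assert uncovered(core_types, core_types) == []

# Dropping a column from the table makes the check report the missing type:
print(uncovered(core_types, core_types - {"jsonb"}))  # ['jsonb']
```

This is why the test is self-maintaining: adding a new core type to pg_catalog makes the query return a row until a matching column is added to manytypes.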
On Sun, Dec 06, 2020 at 12:02:48PM -0600, Justin Pryzby wrote:
I meant to notice if the binary format is accidentally changed again, which was
what happened here:
7c15cef86 Base information_schema.sql_identifier domain on name, not varchar.
I added a table to the regression tests so it's processed by pg_upgrade tests,
run like:
| time make -C src/bin/pg_upgrade check oldsrc=`pwd`/11 oldbindir=`pwd`/11/tmp_install/usr/local/pgsql/bin
Per cfbot, this avoids testing ::xml (support for which may not be enabled),
and it now also tests OID types.
I think the per-version hacks should be grouped by logical change, rather than
by version. Which I've started doing here.
--
Justin
Attachments:
v2-0001-WIP-pg_upgrade-test.sh-changes-needed-to-allow-te.patch (text/x-diff; charset=us-ascii)
From 6a5bcdf6b3c9244e164455792cec612e317cb8d3 Mon Sep 17 00:00:00 2001
From: Justin Pryzby <pryzbyj@telsasoft.com>
Date: Sat, 5 Dec 2020 22:31:19 -0600
Subject: [PATCH v2 1/3] WIP: pg_upgrade/test.sh: changes needed to allow
testing upgrade from v11
---
src/bin/pg_upgrade/test.sh | 92 ++++++++++++++++++++++++++++++++++----
1 file changed, 84 insertions(+), 8 deletions(-)
diff --git a/src/bin/pg_upgrade/test.sh b/src/bin/pg_upgrade/test.sh
index 04aa7fd9f5..9733217535 100644
--- a/src/bin/pg_upgrade/test.sh
+++ b/src/bin/pg_upgrade/test.sh
@@ -23,7 +23,7 @@ standard_initdb() {
# To increase coverage of non-standard segment size and group access
# without increasing test runtime, run these tests with a custom setting.
# Also, specify "-A trust" explicitly to suppress initdb's warning.
- "$1" -N --wal-segsize 1 -g -A trust
+ "$1" -N -A trust
if [ -n "$TEMP_CONFIG" -a -r "$TEMP_CONFIG" ]
then
cat "$TEMP_CONFIG" >> "$PGDATA/postgresql.conf"
@@ -108,6 +108,9 @@ export EXTRA_REGRESS_OPTS
mkdir "$outputdir"
mkdir "$outputdir"/testtablespace
+mkdir "$outputdir"/sql
+mkdir "$outputdir"/expected
+
logdir=`pwd`/log
rm -rf "$logdir"
mkdir "$logdir"
@@ -172,16 +175,83 @@ if "$MAKE" -C "$oldsrc" installcheck-parallel; then
fix_sql=""
case $oldpgversion in
804??)
- fix_sql="DROP FUNCTION public.myfunc(integer); DROP FUNCTION public.oldstyle_length(integer, text);"
+ fix_sql="$fix_sql DROP FUNCTION public.myfunc(integer);"
;;
- *)
- fix_sql="DROP FUNCTION public.oldstyle_length(integer, text);"
+ esac
+
+ # Removed in v10 commit 5ded4bd21
+ case $oldpgversion in
+ 804??|9????)
+ fix_sql="$fix_sql DROP FUNCTION public.oldstyle_length(integer, text);"
+ ;;
+ esac
+
+ # commit 068503c76511cdb0080bab689662a20e86b9c845
+ case $oldpgversion in
+ 10????) # XXX
+ fix_sql="$fix_sql DROP TRANSFORM FOR integer LANGUAGE sql CASCADE;"
+ ;;
+ esac
+
+ # commit db3af9feb19f39827e916145f88fa5eca3130cb2
+ case $oldpgversion in
+ 10????) # XXX
+ fix_sql="$fix_sql DROP FUNCTION boxarea(box);"
+ fix_sql="$fix_sql DROP FUNCTION funny_dup17();"
;;
esac
+
+ # commit cda6a8d01d391eab45c4b3e0043a1b2b31072f5f
+ case $oldpgversion in
+ 10????) # XXX
+ fix_sql="$fix_sql DROP TABLE abstime_tbl;"
+ fix_sql="$fix_sql DROP TABLE reltime_tbl;"
+ fix_sql="$fix_sql DROP TABLE tinterval_tbl;"
+ ;;
+ esac
+
+ # Various things removed for v14
+ case $oldpgversion in
+ 804??|9????|10????|11????|12????|13????)
+ # commit 76f412ab3
+ # This one is only needed for v11+ ??
+ # (see below for more operators removed that also apply to older versions)
+ fix_sql="$fix_sql DROP OPERATOR public.!=- (pg_catalog.int8, NONE);"
+ ;;
+ esac
+ case $oldpgversion in
+ 804??|9????|10????|11????|12????|13????)
+ # commit 76f412ab3
+ fix_sql="$fix_sql DROP OPERATOR public.#@# (pg_catalog.int8, NONE);"
+ fix_sql="$fix_sql DROP OPERATOR public.#%# (pg_catalog.int8, NONE);"
+ fix_sql="$fix_sql DROP OPERATOR public.#@%# (pg_catalog.int8, NONE);"
+
+ # commit 9e38c2bb5 and 97f73a978
+ # fix_sql="$fix_sql DROP AGGREGATE array_larger_accum(anyarray);"
+ fix_sql="$fix_sql DROP AGGREGATE array_cat_accum(anyarray);"
+ fix_sql="$fix_sql DROP AGGREGATE first_el_agg_any(anyelement);"
+
+ # commit 76f412ab3
+ #fix_sql="$fix_sql DROP OPERATOR @#@(bigint,NONE);"
+ fix_sql="$fix_sql DROP OPERATOR @#@(NONE,bigint);"
+ ;;
+ esac
+
+ # commit 578b22971: OIDS removed in v12
+ case $oldpgversion in
+ 804??|9????|10????|11????)
+ fix_sql="$fix_sql ALTER TABLE public.tenk1 SET WITHOUT OIDS;"
+ fix_sql="$fix_sql ALTER TABLE public.tenk1 SET WITHOUT OIDS;"
+ #fix_sql="$fix_sql ALTER TABLE public.stud_emp SET WITHOUT OIDS;" # inherited
+ fix_sql="$fix_sql ALTER TABLE public.emp SET WITHOUT OIDS;"
+ fix_sql="$fix_sql ALTER TABLE public.tt7 SET WITHOUT OIDS;"
+ ;;
+ esac
+
psql -X -d regression -c "$fix_sql;" || psql_fix_sql_status=$?
fi
- pg_dumpall --no-sync -f "$temp_root"/dump1.sql || pg_dumpall1_status=$?
+ pg_dumpall --extra-float-digits=0 --no-sync -f "$temp_root"/dump1.sql || pg_dumpall1_status=$?
if [ "$newsrc" != "$oldsrc" ]; then
# update references to old source tree's regress.so etc
@@ -227,23 +297,29 @@ pg_upgrade $PG_UPGRADE_OPTS -d "${PGDATA}.old" -D "$PGDATA" -b "$oldbindir" -p "
# Windows hosts don't support Unix-y permissions.
case $testhost in
MINGW*) ;;
- *) if [ `find "$PGDATA" -type f ! -perm 640 | wc -l` -ne 0 ]; then
+ *)
+ x=`find "$PGDATA" -type f -perm /127 -ls`
+ if [ -n "$x" ]; then
echo "files in PGDATA with permission != 640";
+ echo "$x" |head
exit 1;
fi ;;
esac
case $testhost in
MINGW*) ;;
- *) if [ `find "$PGDATA" -type d ! -perm 750 | wc -l` -ne 0 ]; then
+ *)
+ x=`find "$PGDATA" -type d -perm /027 -ls`
+ if [ "$x" ]; then
echo "directories in PGDATA with permission != 750";
+ echo "$x" |head
exit 1;
fi ;;
esac
pg_ctl start -l "$logdir/postmaster2.log" -o "$POSTMASTER_OPTS" -w
-pg_dumpall --no-sync -f "$temp_root"/dump2.sql || pg_dumpall2_status=$?
+pg_dumpall --extra-float-digits=0 --no-sync -f "$temp_root"/dump2.sql || pg_dumpall2_status=$?
pg_ctl -m fast stop
if [ -n "$pg_dumpall2_status" ]; then
--
2.17.0
v2-0002-pg_upgrade-test-to-exercise-binary-compatibility.patch (text/x-diff; charset=us-ascii)
From 6feef4f0ef4637a2daca7114e7a5c2df687d738d Mon Sep 17 00:00:00 2001
From: Justin Pryzby <pryzbyj@telsasoft.com>
Date: Sat, 5 Dec 2020 17:20:09 -0600
Subject: [PATCH v2 2/3] pg_upgrade: test to exercise binary compatibility
Creating a table with columns of many different datatypes.
---
src/test/regress/expected/sanity_check.out | 1 +
src/test/regress/expected/type_sanity.out | 38 ++++++++++++++++++++++
src/test/regress/sql/type_sanity.sql | 37 +++++++++++++++++++++
3 files changed, 76 insertions(+)
diff --git a/src/test/regress/expected/sanity_check.out b/src/test/regress/expected/sanity_check.out
index 192445878d..aa0a4fd9be 100644
--- a/src/test/regress/expected/sanity_check.out
+++ b/src/test/regress/expected/sanity_check.out
@@ -69,6 +69,7 @@ line_tbl|f
log_table|f
lseg_tbl|f
main_table|f
+manytypes|f
mlparted|f
mlparted1|f
mlparted11|f
diff --git a/src/test/regress/expected/type_sanity.out b/src/test/regress/expected/type_sanity.out
index 13567ddf84..97cf72bf78 100644
--- a/src/test/regress/expected/type_sanity.out
+++ b/src/test/regress/expected/type_sanity.out
@@ -660,3 +660,41 @@ WHERE pronargs != 2
----------+------------+---------
(0 rows)
+-- Create a table with different data types, to exercise binary compatibility
+-- during pg_upgrade test
+CREATE TABLE manytypes AS SELECT
+'(11,12)'::point, '(1,1),(2,2)'::line,
+'((11,11),(12,12))'::lseg, '((11,11),(13,13))'::box,
+'((11,12),(13,13),(14,14))'::path AS openedpath, '[(11,12),(13,13),(14,14)]'::path AS closedpath,
+'((11,12),(13,13),(14,14))'::polygon, '1,1,1'::circle,
+'today'::date, 'now'::time, 'now'::timestamp, 'now'::timetz, 'now'::timestamptz, '12 seconds'::interval,
+'{"reason":"because"}'::json, '{"when":"now"}'::jsonb, '$.a[*] ? (@ > 2)'::jsonpath,
+'127.0.0.1'::inet, '127.0.0.0/8'::cidr, '00:01:03:86:1c:ba'::macaddr8, '00:01:03:86:1c:ba'::macaddr,
+2::int2, 4::int4, 8::int8, 4::float4, '8'::float8, pi()::numeric,
+'c'::bpchar, 'abc'::varchar, 'name'::name, 'txt'::text, true::bool,
+E'\\xDEADBEEF'::bytea, B'10001'::bit, B'10001'::varbit AS varbit, '12.34'::money,
+'abc'::refcursor,
+'1 2'::int2vector, '1 2'::oidvector, format('%s=UC/%s', USER, USER)::aclitem,
+'a fat cat sat on a mat and ate a fat rat'::tsvector, 'fat & rat'::tsquery,
+'a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a11'::uuid, '11'::xid8,
+'pg_class'::regclass, 'regtype'::regtype type,
+'pg_class'::regclass::oid, '(1,1)'::tid, '2'::xid, '3'::cid,
+1::information_schema.cardinal_number,
+'l'::information_schema.character_data,
+'n'::information_schema.sql_identifier,
+'now'::information_schema.time_stamp,
+'YES'::information_schema.yes_or_no;
+-- And now a test on the previous test, checking that all core types are
+-- included in this table (or some other non-catalog table processed by pg_upgrade).
+SELECT typname, typtype, typelem, typarray, typarray FROM pg_type t
+WHERE typnamespace IN ('pg_catalog'::regnamespace, 'information_schema'::regnamespace)
+AND typtype IN ('b', 'e', 'd')
+-- reg* cannot be pg_upgraded
+AND NOT typname~'_|^char$|^reg'
+-- XML might be disabled at compile-time
+AND oid != ALL(ARRAY['gtsvector', 'xml']::regtype[])
+AND NOT EXISTS (SELECT * FROM pg_attribute a WHERE a.atttypid=t.oid AND a.attnum>0 AND a.attrelid='manytypes'::regclass);
+ typname | typtype | typelem | typarray | typarray
+---------+---------+---------+----------+----------
+(0 rows)
+
diff --git a/src/test/regress/sql/type_sanity.sql b/src/test/regress/sql/type_sanity.sql
index 8c6e614f20..e3012b0888 100644
--- a/src/test/regress/sql/type_sanity.sql
+++ b/src/test/regress/sql/type_sanity.sql
@@ -489,3 +489,40 @@ FROM pg_range p1 JOIN pg_proc p ON p.oid = p1.rngsubdiff
WHERE pronargs != 2
OR proargtypes[0] != rngsubtype OR proargtypes[1] != rngsubtype
OR prorettype != 'pg_catalog.float8'::regtype;
+
+-- Create a table with different data types, to exercise binary compatibility
+-- during pg_upgrade test
+
+CREATE TABLE manytypes AS SELECT
+'(11,12)'::point, '(1,1),(2,2)'::line,
+'((11,11),(12,12))'::lseg, '((11,11),(13,13))'::box,
+'((11,12),(13,13),(14,14))'::path AS openedpath, '[(11,12),(13,13),(14,14)]'::path AS closedpath,
+'((11,12),(13,13),(14,14))'::polygon, '1,1,1'::circle,
+'today'::date, 'now'::time, 'now'::timestamp, 'now'::timetz, 'now'::timestamptz, '12 seconds'::interval,
+'{"reason":"because"}'::json, '{"when":"now"}'::jsonb, '$.a[*] ? (@ > 2)'::jsonpath,
+'127.0.0.1'::inet, '127.0.0.0/8'::cidr, '00:01:03:86:1c:ba'::macaddr8, '00:01:03:86:1c:ba'::macaddr,
+2::int2, 4::int4, 8::int8, 4::float4, '8'::float8, pi()::numeric,
+'c'::bpchar, 'abc'::varchar, 'name'::name, 'txt'::text, true::bool,
+E'\\xDEADBEEF'::bytea, B'10001'::bit, B'10001'::varbit AS varbit, '12.34'::money,
+'abc'::refcursor,
+'1 2'::int2vector, '1 2'::oidvector, format('%s=UC/%s', USER, USER)::aclitem,
+'a fat cat sat on a mat and ate a fat rat'::tsvector, 'fat & rat'::tsquery,
+'a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a11'::uuid, '11'::xid8,
+'pg_class'::regclass, 'regtype'::regtype type,
+'pg_class'::regclass::oid, '(1,1)'::tid, '2'::xid, '3'::cid,
+1::information_schema.cardinal_number,
+'l'::information_schema.character_data,
+'n'::information_schema.sql_identifier,
+'now'::information_schema.time_stamp,
+'YES'::information_schema.yes_or_no;
+
+-- And now a test on the previous test, checking that all core types are
+-- included in this table (or some other non-catalog table processed by pg_upgrade).
+SELECT typname, typtype, typelem, typarray, typarray FROM pg_type t
+WHERE typnamespace IN ('pg_catalog'::regnamespace, 'information_schema'::regnamespace)
+AND typtype IN ('b', 'e', 'd')
+-- reg* cannot be pg_upgraded
+AND NOT typname~'_|^char$|^reg'
+-- XML might be disabled at compile-time
+AND oid != ALL(ARRAY['gtsvector', 'xml']::regtype[])
+AND NOT EXISTS (SELECT * FROM pg_attribute a WHERE a.atttypid=t.oid AND a.attnum>0 AND a.attrelid='manytypes'::regclass);
--
2.17.0
On Wed, Dec 16, 2020 at 11:22:23AM -0600, Justin Pryzby wrote:
On Sun, Dec 06, 2020 at 12:02:48PM -0600, Justin Pryzby wrote:
I meant to notice if the binary format is accidentally changed again, which was
what happened here:
7c15cef86 Base information_schema.sql_identifier domain on name, not varchar.
I added a table to the regression tests so it's processed by pg_upgrade tests,
run like:
| time make -C src/bin/pg_upgrade check oldsrc=`pwd`/11 oldbindir=`pwd`/11/tmp_install/usr/local/pgsql/bin
Per cfbot, this avoids testing ::xml (support for which may not be enabled)
And also now tests oid types.
I think the per-version hacks should be grouped by logical change, rather than
by version. Which I've started doing here.
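(A side note on the version patterns used in the hunks above: `$oldpgversion` is taken from `SHOW server_version_num` in test.sh, if I read it right, so the case arms like `804??`, `9????` and `11????` match the numeric PG_VERSION_NUM encoding. A quick sketch of that encoding; `pg_version_num` is an illustrative helper, not a real function:)

```python
def pg_version_num(*parts):
    # Illustrative helper: encode a PostgreSQL version the way
    # server_version_num does.  Pre-10 releases have two-part majors
    # (9.6.3 -> 90603); from 10 on the major is a single number
    # (11.5 -> 110005).
    if parts[0] >= 10:
        major, minor = parts
        return major * 10000 + minor
    major1, major2, minor = parts
    return major1 * 10000 + major2 * 100 + minor

print(pg_version_num(9, 6, 3))   # 90603  -- matched by the `9????` arm
print(pg_version_num(11, 5))     # 110005 -- matched by `11????`
print(pg_version_num(8, 4, 22))  # 80422  -- matched by `804??`
```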
rebased on 6df7a9698bb036610c1e8c6d375e1be38cb26d5f
--
Justin
Attachments:
v3-0001-WIP-pg_upgrade-test.sh-changes-needed-to-allow-te.patch (text/x-diff; charset=us-ascii)
From a1114c3db36891f169122383db136fd8fb47cb10 Mon Sep 17 00:00:00 2001
From: Justin Pryzby <pryzbyj@telsasoft.com>
Date: Sat, 5 Dec 2020 22:31:19 -0600
Subject: [PATCH v3 1/2] WIP: pg_upgrade/test.sh: changes needed to allow
testing upgrade from v11
---
src/bin/pg_upgrade/test.sh | 92 ++++++++++++++++++++++++++++++++++----
1 file changed, 84 insertions(+), 8 deletions(-)
diff --git a/src/bin/pg_upgrade/test.sh b/src/bin/pg_upgrade/test.sh
index 04aa7fd9f5..9733217535 100644
--- a/src/bin/pg_upgrade/test.sh
+++ b/src/bin/pg_upgrade/test.sh
@@ -23,7 +23,7 @@ standard_initdb() {
# To increase coverage of non-standard segment size and group access
# without increasing test runtime, run these tests with a custom setting.
# Also, specify "-A trust" explicitly to suppress initdb's warning.
- "$1" -N --wal-segsize 1 -g -A trust
+ "$1" -N -A trust
if [ -n "$TEMP_CONFIG" -a -r "$TEMP_CONFIG" ]
then
cat "$TEMP_CONFIG" >> "$PGDATA/postgresql.conf"
@@ -108,6 +108,9 @@ export EXTRA_REGRESS_OPTS
mkdir "$outputdir"
mkdir "$outputdir"/testtablespace
+mkdir "$outputdir"/sql
+mkdir "$outputdir"/expected
+
logdir=`pwd`/log
rm -rf "$logdir"
mkdir "$logdir"
@@ -172,16 +175,83 @@ if "$MAKE" -C "$oldsrc" installcheck-parallel; then
fix_sql=""
case $oldpgversion in
804??)
- fix_sql="DROP FUNCTION public.myfunc(integer); DROP FUNCTION public.oldstyle_length(integer, text);"
+ fix_sql="$fix_sql DROP FUNCTION public.myfunc(integer);"
;;
- *)
- fix_sql="DROP FUNCTION public.oldstyle_length(integer, text);"
+ esac
+
+ # Removed in v10 commit 5ded4bd21
+ case $oldpgversion in
+ 804??|9????)
+ fix_sql="$fix_sql DROP FUNCTION public.oldstyle_length(integer, text);"
+ ;;
+ esac
+
+ # commit 068503c76511cdb0080bab689662a20e86b9c845
+ case $oldpgversion in
+ 10????) # XXX
+ fix_sql="$fix_sql DROP TRANSFORM FOR integer LANGUAGE sql CASCADE;"
+ ;;
+ esac
+
+ # commit db3af9feb19f39827e916145f88fa5eca3130cb2
+ case $oldpgversion in
+ 10????) # XXX
+ fix_sql="$fix_sql DROP FUNCTION boxarea(box);"
+ fix_sql="$fix_sql DROP FUNCTION funny_dup17();"
;;
esac
+
+ # commit cda6a8d01d391eab45c4b3e0043a1b2b31072f5f
+ case $oldpgversion in
+ 10????) # XXX
+ fix_sql="$fix_sql DROP TABLE abstime_tbl;"
+ fix_sql="$fix_sql DROP TABLE reltime_tbl;"
+ fix_sql="$fix_sql DROP TABLE tinterval_tbl;"
+ ;;
+ esac
+
+ # Various things removed for v14
+ case $oldpgversion in
+ 804??|9????|10????|11????|12????|13????)
+ # commit 76f412ab3
+ # This one is only needed for v11+ ??
+ # (see below for more operators removed that also apply to older versions)
+ fix_sql="$fix_sql DROP OPERATOR public.!=- (pg_catalog.int8, NONE);"
+ ;;
+ esac
+ case $oldpgversion in
+ 804??|9????|10????|11????|12????|13????)
+ # commit 76f412ab3
+ fix_sql="$fix_sql DROP OPERATOR public.#@# (pg_catalog.int8, NONE);"
+ fix_sql="$fix_sql DROP OPERATOR public.#%# (pg_catalog.int8, NONE);"
+ fix_sql="$fix_sql DROP OPERATOR public.#@%# (pg_catalog.int8, NONE);"
+
+ # commit 9e38c2bb5 and 97f73a978
+ # fix_sql="$fix_sql DROP AGGREGATE array_larger_accum(anyarray);"
+ fix_sql="$fix_sql DROP AGGREGATE array_cat_accum(anyarray);"
+ fix_sql="$fix_sql DROP AGGREGATE first_el_agg_any(anyelement);"
+
+ # commit 76f412ab3
+ #fix_sql="$fix_sql DROP OPERATOR @#@(bigint,NONE);"
+ fix_sql="$fix_sql DROP OPERATOR @#@(NONE,bigint);"
+ ;;
+ esac
+
+ # commit 578b22971: OIDS removed in v12
+ case $oldpgversion in
+ 804??|9????|10????|11????)
+ fix_sql="$fix_sql ALTER TABLE public.tenk1 SET WITHOUT OIDS;"
+ fix_sql="$fix_sql ALTER TABLE public.tenk1 SET WITHOUT OIDS;"
+ #fix_sql="$fix_sql ALTER TABLE public.stud_emp SET WITHOUT OIDS;" # inherited
+ fix_sql="$fix_sql ALTER TABLE public.emp SET WITHOUT OIDS;"
+ fix_sql="$fix_sql ALTER TABLE public.tt7 SET WITHOUT OIDS;"
+ ;;
+ esac
+
psql -X -d regression -c "$fix_sql;" || psql_fix_sql_status=$?
fi
- pg_dumpall --no-sync -f "$temp_root"/dump1.sql || pg_dumpall1_status=$?
+ pg_dumpall --extra-float-digits=0 --no-sync -f "$temp_root"/dump1.sql || pg_dumpall1_status=$?
if [ "$newsrc" != "$oldsrc" ]; then
# update references to old source tree's regress.so etc
@@ -227,23 +297,29 @@ pg_upgrade $PG_UPGRADE_OPTS -d "${PGDATA}.old" -D "$PGDATA" -b "$oldbindir" -p "
# Windows hosts don't support Unix-y permissions.
case $testhost in
MINGW*) ;;
- *) if [ `find "$PGDATA" -type f ! -perm 640 | wc -l` -ne 0 ]; then
+ *)
+ x=`find "$PGDATA" -type f -perm /127 -ls`
+ if [ -n "$x" ]; then
echo "files in PGDATA with permission != 640";
+ echo "$x" |head
exit 1;
fi ;;
esac
case $testhost in
MINGW*) ;;
- *) if [ `find "$PGDATA" -type d ! -perm 750 | wc -l` -ne 0 ]; then
+ *)
+ x=`find "$PGDATA" -type d -perm /027 -ls`
+ if [ "$x" ]; then
echo "directories in PGDATA with permission != 750";
+ echo "$x" |head
exit 1;
fi ;;
esac
pg_ctl start -l "$logdir/postmaster2.log" -o "$POSTMASTER_OPTS" -w
-pg_dumpall --no-sync -f "$temp_root"/dump2.sql || pg_dumpall2_status=$?
+pg_dumpall --extra-float-digits=0 --no-sync -f "$temp_root"/dump2.sql || pg_dumpall2_status=$?
pg_ctl -m fast stop
if [ -n "$pg_dumpall2_status" ]; then
--
2.17.0
v3-0002-pg_upgrade-test-to-exercise-binary-compatibility.patch (text/x-diff; charset=us-ascii)
From 40269cc927cbb723256b1e6995f7aeefcad6ceac Mon Sep 17 00:00:00 2001
From: Justin Pryzby <pryzbyj@telsasoft.com>
Date: Sat, 5 Dec 2020 17:20:09 -0600
Subject: [PATCH v3 2/2] pg_upgrade: test to exercise binary compatibility
Creating a table with columns of many different datatypes.
---
src/test/regress/expected/sanity_check.out | 1 +
src/test/regress/expected/type_sanity.out | 38 ++++++++++++++++++++++
src/test/regress/sql/type_sanity.sql | 37 +++++++++++++++++++++
3 files changed, 76 insertions(+)
diff --git a/src/test/regress/expected/sanity_check.out b/src/test/regress/expected/sanity_check.out
index d9ce961be2..f67e3853ff 100644
--- a/src/test/regress/expected/sanity_check.out
+++ b/src/test/regress/expected/sanity_check.out
@@ -69,6 +69,7 @@ line_tbl|f
log_table|f
lseg_tbl|f
main_table|f
+manytypes|f
mlparted|f
mlparted1|f
mlparted11|f
diff --git a/src/test/regress/expected/type_sanity.out b/src/test/regress/expected/type_sanity.out
index 0c74dc96a8..f3476bbf10 100644
--- a/src/test/regress/expected/type_sanity.out
+++ b/src/test/regress/expected/type_sanity.out
@@ -672,3 +672,41 @@ WHERE p1.rngmultitypid IS NULL OR p1.rngmultitypid = 0;
----------+------------+---------------
(0 rows)
+-- Create a table with different data types, to exercise binary compatibility
+-- during pg_upgrade test
+CREATE TABLE manytypes AS SELECT
+'(11,12)'::point, '(1,1),(2,2)'::line,
+'((11,11),(12,12))'::lseg, '((11,11),(13,13))'::box,
+'((11,12),(13,13),(14,14))'::path AS openedpath, '[(11,12),(13,13),(14,14)]'::path AS closedpath,
+'((11,12),(13,13),(14,14))'::polygon, '1,1,1'::circle,
+'today'::date, 'now'::time, 'now'::timestamp, 'now'::timetz, 'now'::timestamptz, '12 seconds'::interval,
+'{"reason":"because"}'::json, '{"when":"now"}'::jsonb, '$.a[*] ? (@ > 2)'::jsonpath,
+'127.0.0.1'::inet, '127.0.0.0/8'::cidr, '00:01:03:86:1c:ba'::macaddr8, '00:01:03:86:1c:ba'::macaddr,
+2::int2, 4::int4, 8::int8, 4::float4, '8'::float8, pi()::numeric,
+'c'::bpchar, 'abc'::varchar, 'name'::name, 'txt'::text, true::bool,
+E'\\xDEADBEEF'::bytea, B'10001'::bit, B'10001'::varbit AS varbit, '12.34'::money,
+'abc'::refcursor,
+'1 2'::int2vector, '1 2'::oidvector, format('%s=UC/%s', USER, USER)::aclitem,
+'a fat cat sat on a mat and ate a fat rat'::tsvector, 'fat & rat'::tsquery,
+'a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a11'::uuid, '11'::xid8,
+'pg_class'::regclass, 'regtype'::regtype type,
+'pg_class'::regclass::oid, '(1,1)'::tid, '2'::xid, '3'::cid,
+1::information_schema.cardinal_number,
+'l'::information_schema.character_data,
+'n'::information_schema.sql_identifier,
+'now'::information_schema.time_stamp,
+'YES'::information_schema.yes_or_no;
+-- And now a test on the previous test, checking that all core types are
+-- included in this table (or some other non-catalog table processed by pg_upgrade).
+SELECT typname, typtype, typelem, typarray, typarray FROM pg_type t
+WHERE typnamespace IN ('pg_catalog'::regnamespace, 'information_schema'::regnamespace)
+AND typtype IN ('b', 'e', 'd')
+-- reg* cannot be pg_upgraded
+AND NOT typname~'_|^char$|^reg'
+-- XML might be disabled at compile-time
+AND oid != ALL(ARRAY['gtsvector', 'xml']::regtype[])
+AND NOT EXISTS (SELECT * FROM pg_attribute a WHERE a.atttypid=t.oid AND a.attnum>0 AND a.attrelid='manytypes'::regclass);
+ typname | typtype | typelem | typarray | typarray
+---------+---------+---------+----------+----------
+(0 rows)
+
diff --git a/src/test/regress/sql/type_sanity.sql b/src/test/regress/sql/type_sanity.sql
index 4739aca84a..ef08e5d010 100644
--- a/src/test/regress/sql/type_sanity.sql
+++ b/src/test/regress/sql/type_sanity.sql
@@ -495,3 +495,40 @@ WHERE pronargs != 2
SELECT p1.rngtypid, p1.rngsubtype, p1.rngmultitypid
FROM pg_range p1
WHERE p1.rngmultitypid IS NULL OR p1.rngmultitypid = 0;
+
+-- Create a table with different data types, to exercise binary compatibility
+-- during pg_upgrade test
+
+CREATE TABLE manytypes AS SELECT
+'(11,12)'::point, '(1,1),(2,2)'::line,
+'((11,11),(12,12))'::lseg, '((11,11),(13,13))'::box,
+'((11,12),(13,13),(14,14))'::path AS openedpath, '[(11,12),(13,13),(14,14)]'::path AS closedpath,
+'((11,12),(13,13),(14,14))'::polygon, '1,1,1'::circle,
+'today'::date, 'now'::time, 'now'::timestamp, 'now'::timetz, 'now'::timestamptz, '12 seconds'::interval,
+'{"reason":"because"}'::json, '{"when":"now"}'::jsonb, '$.a[*] ? (@ > 2)'::jsonpath,
+'127.0.0.1'::inet, '127.0.0.0/8'::cidr, '00:01:03:86:1c:ba'::macaddr8, '00:01:03:86:1c:ba'::macaddr,
+2::int2, 4::int4, 8::int8, 4::float4, '8'::float8, pi()::numeric,
+'c'::bpchar, 'abc'::varchar, 'name'::name, 'txt'::text, true::bool,
+E'\\xDEADBEEF'::bytea, B'10001'::bit, B'10001'::varbit AS varbit, '12.34'::money,
+'abc'::refcursor,
+'1 2'::int2vector, '1 2'::oidvector, format('%s=UC/%s', USER, USER)::aclitem,
+'a fat cat sat on a mat and ate a fat rat'::tsvector, 'fat & rat'::tsquery,
+'a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a11'::uuid, '11'::xid8,
+'pg_class'::regclass, 'regtype'::regtype type,
+'pg_class'::regclass::oid, '(1,1)'::tid, '2'::xid, '3'::cid,
+1::information_schema.cardinal_number,
+'l'::information_schema.character_data,
+'n'::information_schema.sql_identifier,
+'now'::information_schema.time_stamp,
+'YES'::information_schema.yes_or_no;
+
+-- And now a test on the previous test, checking that all core types are
+-- included in this table (or some other non-catalog table processed by pg_upgrade).
+SELECT typname, typtype, typelem, typarray, typarray FROM pg_type t
+WHERE typnamespace IN ('pg_catalog'::regnamespace, 'information_schema'::regnamespace)
+AND typtype IN ('b', 'e', 'd')
+-- reg* cannot be pg_upgraded
+AND NOT typname~'_|^char$|^reg'
+-- XML might be disabled at compile-time
+AND oid != ALL(ARRAY['gtsvector', 'xml']::regtype[])
+AND NOT EXISTS (SELECT * FROM pg_attribute a WHERE a.atttypid=t.oid AND a.attnum>0 AND a.attrelid='manytypes'::regclass);
--
2.17.0
On 2020-12-27 20:07, Justin Pryzby wrote:
On Wed, Dec 16, 2020 at 11:22:23AM -0600, Justin Pryzby wrote:
On Sun, Dec 06, 2020 at 12:02:48PM -0600, Justin Pryzby wrote:
I meant to notice if the binary format is accidentally changed again, which was
what happened here:
7c15cef86 Base information_schema.sql_identifier domain on name, not varchar.
I added a table to the regression tests so it's processed by pg_upgrade tests,
run like:
| time make -C src/bin/pg_upgrade check oldsrc=`pwd`/11 oldbindir=`pwd`/11/tmp_install/usr/local/pgsql/bin
Per cfbot, this avoids testing ::xml (support for which may not be enabled)
And also now tests oid types.
I think the per-version hacks should be grouped by logical change, rather than
by version. Which I've started doing here.
rebased on 6df7a9698bb036610c1e8c6d375e1be38cb26d5f
I think these patches could use some in-place documentation of what they
are trying to achieve and how they do it. The required information is
spread over a lengthy thread. No one wants to read that. Add commit
messages to the patches.
On Mon, Jan 11, 2021 at 03:28:08PM +0100, Peter Eisentraut wrote:
On 2020-12-27 20:07, Justin Pryzby wrote:
rebased on 6df7a9698bb036610c1e8c6d375e1be38cb26d5f
I think these patches could use some in-place documentation of what they are
trying to achieve and how they do it. The required information is spread
over a lengthy thread. No one wants to read that. Add commit messages to
the patches.
Oh, I see that now, and agree that you need to explain each item with a
comment. pg_upgrade is doing some odd things, so documenting everything
it does is a big win.
--
Bruce Momjian <bruce@momjian.us> https://momjian.us
EnterpriseDB https://enterprisedb.com
The usefulness of a cup is in its emptiness, Bruce Lee
On Mon, Jan 11, 2021 at 03:28:08PM +0100, Peter Eisentraut wrote:
On 2020-12-27 20:07, Justin Pryzby wrote:
On Wed, Dec 16, 2020 at 11:22:23AM -0600, Justin Pryzby wrote:
On Sun, Dec 06, 2020 at 12:02:48PM -0600, Justin Pryzby wrote:
I meant to notice if the binary format is accidentally changed again, which was
what happened here:
7c15cef86 Base information_schema.sql_identifier domain on name, not varchar.
I added a table to the regression tests so it's processed by pg_upgrade tests,
run like:
| time make -C src/bin/pg_upgrade check oldsrc=`pwd`/11 oldbindir=`pwd`/11/tmp_install/usr/local/pgsql/bin
Per cfbot, this avoids testing ::xml (support for which may not be enabled)
And also now tests oid types.
I think the per-version hacks should be grouped by logical change, rather than
by version. Which I've started doing here.
rebased on 6df7a9698bb036610c1e8c6d375e1be38cb26d5f
I think these patches could use some in-place documentation of what they are
trying to achieve and how they do it. The required information is spread
over a lengthy thread. No one wants to read that. Add commit messages to
the patches.
The 0001 patch fixes pg_upgrade/test.sh, which was dysfunctional.
Portions of the first patch were independently handled by commits 52202bb39,
fa744697c, 091866724. So this is rebased on those.
I guess updating this script should be a part of a beta-checklist somewhere,
since I guess nobody will want to backpatch changes for testing older releases.
0002 allows detecting the information_schema problem that was introduced at:
7c15cef86 Base information_schema.sql_identifier domain on name, not varchar.
+-- Create a table with different data types, to exercise binary compatibility
+-- during pg_upgrade test
If binary compatibility is changed, I expect this will error, crash, or at least
return wrong data, and thereby fail tests.
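(The type-name filter used by the coverage query can be sanity-checked outside the server, since SQL `~` is unanchored POSIX matching like Python's `re.search`; the sample names below are just illustrative:)

```python
import re

# The patch's filter: skip array types (names containing '_'), the internal
# "char" type, and the reg* alias types, which cannot be carried across a
# pg_upgrade.  Unanchored search mirrors SQL's `~` operator.
skip = re.compile(r'_|^char$|^reg')

names = ["int4", "_int4", "char", "bpchar", "regclass", "text"]
kept = [n for n in names if not skip.search(n)]
print(kept)  # _int4, "char" and regclass are filtered out
```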
--
Justin
On Sun, Dec 06, 2020 at 12:02:48PM -0600, Justin Pryzby wrote:
I checked that if I cherry-pick 0002 to v11, and comment out
old_11_check_for_sql_identifier_data_type_usage(), then pg_upgrade/test.sh
detects the original problem:
pg_dump: error: Error message from server: ERROR: invalid memory alloc request size 18446744073709551613
I understand the buildfarm has its own cross-version-upgrade test, which I
think would catch this on its own.
These all seem to complicate use of pg_upgrade/test.sh, so 0001 is needed to
allow testing upgrade from older releases.
e78900afd217fa3eaa77c51e23a94c1466af421c Create by default sql/ and expected/ for output directory in pg_regress
40b132c1afbb4b1494aa8e48cc35ec98d2b90777 In the pg_upgrade test suite, don't write to src/test/regress.
fc49e24fa69a15efacd5b8958115ed9c43c48f9a Make WAL segment size configurable at initdb time.
c37b3d08ca6873f9d4eaf24c72a90a550970cbb8 Allow group access on PGDATA
da9b580d89903fee871cf54845ffa2b26bda2e11 Refactor dir/file permissions
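(An aside on the error message itself, not from the patches: the seemingly enormous allocation size is a small negative value reinterpreted as an unsigned 64-bit size — presumably a varlena length computed from the mismatched on-disk representation — since 18446744073709551613 is exactly 2^64 - 3:)

```python
# The "invalid memory alloc request size" reported for the corrupted
# sql_identifier columns is (uint64)-3: a negative length cast to a
# 64-bit size.
bad_size = 18446744073709551613
print(bad_size == 2**64 - 3)  # True
print(bad_size - 2**64)       # -3
```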
Attachments:
v4-0001-WIP-pg_upgrade-test.sh-changes-needed-to-allow-te.patch (text/x-diff; charset=us-ascii)
From b3f829ab0fd880962d43eac0222bdaab2b8070f4 Mon Sep 17 00:00:00 2001
From: Justin Pryzby <pryzbyj@telsasoft.com>
Date: Sat, 5 Dec 2020 22:31:19 -0600
Subject: [PATCH v4 1/3] WIP: pg_upgrade/test.sh: changes needed to allow
testing upgrade to v14dev from v9.5-v13
---
src/bin/pg_upgrade/test.sh | 93 +++++++++++++++++++++++++++++++++++---
1 file changed, 86 insertions(+), 7 deletions(-)
diff --git a/src/bin/pg_upgrade/test.sh b/src/bin/pg_upgrade/test.sh
index ca923ba01b..b36fca4233 100644
--- a/src/bin/pg_upgrade/test.sh
+++ b/src/bin/pg_upgrade/test.sh
@@ -177,18 +177,97 @@ if "$MAKE" -C "$oldsrc" installcheck-parallel; then
esac
fix_sql="$fix_sql
DROP FUNCTION IF EXISTS
- public.oldstyle_length(integer, text); -- last in 9.6
+ public.oldstyle_length(integer, text);" # last in 9.6 -- commit 5ded4bd21
+ fix_sql="$fix_sql
DROP FUNCTION IF EXISTS
- public.putenv(text); -- last in v13
- DROP OPERATOR IF EXISTS -- last in v13
- public.#@# (pg_catalog.int8, NONE),
- public.#%# (pg_catalog.int8, NONE),
- public.!=- (pg_catalog.int8, NONE),
+ public.putenv(text);" # last in v13
+ # last in v13 commit 76f412ab3
+ # public.!=- This one is only needed for v11+ ??
+ # Note, until v10, operators could only be dropped one at a time
+ fix_sql="$fix_sql
+ DROP OPERATOR IF EXISTS
+ public.#@# (pg_catalog.int8, NONE);"
+ fix_sql="$fix_sql
+ DROP OPERATOR IF EXISTS
+ public.#%# (pg_catalog.int8, NONE);"
+ fix_sql="$fix_sql
+ DROP OPERATOR IF EXISTS
+ public.!=- (pg_catalog.int8, NONE);"
+ fix_sql="$fix_sql
+ DROP OPERATOR IF EXISTS
public.#@%# (pg_catalog.int8, NONE);"
+
+ # commit 068503c76511cdb0080bab689662a20e86b9c845
+ case $oldpgversion in
+ 10????)
+ fix_sql="$fix_sql
+ DROP TRANSFORM FOR integer LANGUAGE sql CASCADE;"
+ ;;
+ esac
+
+ # commit db3af9feb19f39827e916145f88fa5eca3130cb2
+ case $oldpgversion in
+ 10????)
+ fix_sql="$fix_sql
+ DROP FUNCTION boxarea(box);"
+ fix_sql="$fix_sql
+ DROP FUNCTION funny_dup17();"
+ ;;
+ esac
+
+ # commit cda6a8d01d391eab45c4b3e0043a1b2b31072f5f
+ case $oldpgversion in
+ 10????)
+ fix_sql="$fix_sql
+ DROP TABLE abstime_tbl;"
+ fix_sql="$fix_sql
+ DROP TABLE reltime_tbl;"
+ fix_sql="$fix_sql
+ DROP TABLE tinterval_tbl;"
+ ;;
+ esac
+
+ # Various things removed for v14
+ case $oldpgversion in
+ 906??|10????|11????|12????|13????)
+ fix_sql="$fix_sql
+ DROP AGGREGATE first_el_agg_any(anyelement);"
+ ;;
+ esac
+ case $oldpgversion in
+ 90[56]??|10????|11????|12????|13????)
+ # commit 9e38c2bb5 and 97f73a978
+ # fix_sql="$fix_sql DROP AGGREGATE array_larger_accum(anyarray);"
+ fix_sql="$fix_sql
+ DROP AGGREGATE array_cat_accum(anyarray);"
+
+ # commit 76f412ab3
+ #fix_sql="$fix_sql DROP OPERATOR @#@(bigint,NONE);"
+ fix_sql="$fix_sql
+ DROP OPERATOR @#@(NONE,bigint);"
+ ;;
+ esac
+
+ # commit 578b22971: OIDS removed in v12
+ case $oldpgversion in
+ 804??|9????|10????|11????)
+ fix_sql="$fix_sql
+ ALTER TABLE public.tenk1 SET WITHOUT OIDS;"
+ fix_sql="$fix_sql
+ ALTER TABLE public.tenk1 SET WITHOUT OIDS;"
+ #fix_sql="$fix_sql ALTER TABLE public.stud_emp SET WITHOUT OIDS;" # inherited
+ fix_sql="$fix_sql
+ ALTER TABLE public.emp SET WITHOUT OIDS;"
+ fix_sql="$fix_sql
+ ALTER TABLE public.tt7 SET WITHOUT OIDS;"
+ ;;
+ esac
+
psql -X -d regression -c "$fix_sql;" || psql_fix_sql_status=$?
fi
- pg_dumpall --no-sync -f "$temp_root"/dump1.sql || pg_dumpall1_status=$?
+ echo "fix_sql: $oldpgversion: $fix_sql" >&2
+ pg_dumpall --extra-float-digits=0 --no-sync -f "$temp_root"/dump1.sql || pg_dumpall1_status=$?
if [ "$newsrc" != "$oldsrc" ]; then
# update references to old source tree's regress.so etc
--
2.17.0
v4-0002-More-changes-needed-to-allow-upgrade-testing.patch (text/x-diff; charset=us-ascii)
From 093a976220a6bdbca13a17e0b2c0d6256b2b74fa Mon Sep 17 00:00:00 2001
From: Justin Pryzby <pryzbyj@telsasoft.com>
Date: Mon, 11 Jan 2021 21:41:16 -0600
Subject: [PATCH v4 2/3] More changes needed to allow upgrade testing:
These all seem to complicate use of pg_upgrade/test.sh:
e78900afd217fa3eaa77c51e23a94c1466af421c Create by default sql/ and expected/ for output directory in pg_regress
40b132c1afbb4b1494aa8e48cc35ec98d2b90777 In the pg_upgrade test suite, don't write to src/test/regress.
fc49e24fa69a15efacd5b8958115ed9c43c48f9a Make WAL segment size configurable at initdb time.
c37b3d08ca6873f9d4eaf24c72a90a550970cbb8 Allow group access on PGDATA
da9b580d89903fee871cf54845ffa2b26bda2e11 Refactor dir/file permissions
---
src/bin/pg_upgrade/test.sh | 17 +++++++++++++----
1 file changed, 13 insertions(+), 4 deletions(-)
diff --git a/src/bin/pg_upgrade/test.sh b/src/bin/pg_upgrade/test.sh
index b36fca4233..ab45801c35 100644
--- a/src/bin/pg_upgrade/test.sh
+++ b/src/bin/pg_upgrade/test.sh
@@ -23,7 +23,7 @@ standard_initdb() {
# To increase coverage of non-standard segment size and group access
# without increasing test runtime, run these tests with a custom setting.
# Also, specify "-A trust" explicitly to suppress initdb's warning.
- "$1" -N --wal-segsize 1 -g -A trust
+ "$1" -N -A trust
if [ -n "$TEMP_CONFIG" -a -r "$TEMP_CONFIG" ]
then
cat "$TEMP_CONFIG" >> "$PGDATA/postgresql.conf"
@@ -108,6 +108,9 @@ export EXTRA_REGRESS_OPTS
mkdir "$outputdir"
mkdir "$outputdir"/testtablespace
+mkdir "$outputdir"/sql
+mkdir "$outputdir"/expected
+
logdir=`pwd`/log
rm -rf "$logdir"
mkdir "$logdir"
@@ -313,23 +316,29 @@ pg_upgrade $PG_UPGRADE_OPTS -d "${PGDATA}.old" -D "$PGDATA" -b "$oldbindir" -p "
# Windows hosts don't support Unix-y permissions.
case $testhost in
MINGW*) ;;
- *) if [ `find "$PGDATA" -type f ! -perm 640 | wc -l` -ne 0 ]; then
+ *)
+ x=`find "$PGDATA" -type f -perm /127 -ls`
+ if [ -n "$x" ]; then
echo "files in PGDATA with permission != 640";
+ echo "$x" |head
exit 1;
fi ;;
esac
case $testhost in
MINGW*) ;;
- *) if [ `find "$PGDATA" -type d ! -perm 750 | wc -l` -ne 0 ]; then
+ *)
+ x=`find "$PGDATA" -type d -perm /027 -ls`
+ if [ "$x" ]; then
echo "directories in PGDATA with permission != 750";
+ echo "$x" |head
exit 1;
fi ;;
esac
pg_ctl start -l "$logdir/postmaster2.log" -o "$POSTMASTER_OPTS" -w
-pg_dumpall --no-sync -f "$temp_root"/dump2.sql || pg_dumpall2_status=$?
+pg_dumpall --extra-float-digits=0 --no-sync -f "$temp_root"/dump2.sql || pg_dumpall2_status=$?
pg_ctl -m fast stop
if [ -n "$pg_dumpall2_status" ]; then
--
2.17.0
v4-0003-pg_upgrade-test-to-exercise-binary-compatibility.patch (text/x-diff; charset=us-ascii)
From 91fe77ad5501e0feb26067f369077715565c7ced Mon Sep 17 00:00:00 2001
From: Justin Pryzby <pryzbyj@telsasoft.com>
Date: Sat, 5 Dec 2020 17:20:09 -0600
Subject: [PATCH v4 3/3] pg_upgrade: test to exercise binary compatibility
Creating a table with columns of many different datatypes to notice if the
binary format is accidentally changed again, as happened at:
7c15cef86 Base information_schema.sql_identifier domain on name, not varchar.
I checked that if I cherry-pick to v11, and comment out
old_11_check_for_sql_identifier_data_type_usage(), then pg_upgrade/test.sh
detects the original problem:
pg_dump: error: Error message from server: ERROR: invalid memory alloc request size 18446744073709551613
I understand the buildfarm has its own cross-version-upgrade test, which I
think would catch this on its own.
---
src/test/regress/expected/sanity_check.out | 1 +
src/test/regress/expected/type_sanity.out | 39 ++++++++++++++++++++++
src/test/regress/sql/type_sanity.sql | 38 +++++++++++++++++++++
3 files changed, 78 insertions(+)
diff --git a/src/test/regress/expected/sanity_check.out b/src/test/regress/expected/sanity_check.out
index d9ce961be2..f67e3853ff 100644
--- a/src/test/regress/expected/sanity_check.out
+++ b/src/test/regress/expected/sanity_check.out
@@ -69,6 +69,7 @@ line_tbl|f
log_table|f
lseg_tbl|f
main_table|f
+manytypes|f
mlparted|f
mlparted1|f
mlparted11|f
diff --git a/src/test/regress/expected/type_sanity.out b/src/test/regress/expected/type_sanity.out
index 0c74dc96a8..598a39ae03 100644
--- a/src/test/regress/expected/type_sanity.out
+++ b/src/test/regress/expected/type_sanity.out
@@ -672,3 +672,42 @@ WHERE p1.rngmultitypid IS NULL OR p1.rngmultitypid = 0;
----------+------------+---------------
(0 rows)
+-- Create a table with different data types, to exercise binary compatibility
+-- during pg_upgrade test
+CREATE TABLE manytypes AS SELECT
+'(11,12)'::point, '(1,1),(2,2)'::line,
+'((11,11),(12,12))'::lseg, '((11,11),(13,13))'::box,
+'((11,12),(13,13),(14,14))'::path AS openedpath, '[(11,12),(13,13),(14,14)]'::path AS closedpath,
+'((11,12),(13,13),(14,14))'::polygon, '1,1,1'::circle,
+'today'::date, 'now'::time, 'now'::timestamp, 'now'::timetz, 'now'::timestamptz, '12 seconds'::interval,
+'{"reason":"because"}'::json, '{"when":"now"}'::jsonb, '$.a[*] ? (@ > 2)'::jsonpath,
+'127.0.0.1'::inet, '127.0.0.0/8'::cidr, '00:01:03:86:1c:ba'::macaddr8, '00:01:03:86:1c:ba'::macaddr,
+2::int2, 4::int4, 8::int8, 4::float4, '8'::float8, pi()::numeric,
+'c'::bpchar, 'abc'::varchar, 'name'::name, 'txt'::text, true::bool,
+E'\\xDEADBEEF'::bytea, B'10001'::bit, B'10001'::varbit AS varbit, '12.34'::money,
+'abc'::refcursor,
+'1 2'::int2vector, '1 2'::oidvector, format('%s=UC/%s', USER, USER)::aclitem,
+'a fat cat sat on a mat and ate a fat rat'::tsvector, 'fat & rat'::tsquery,
+'a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a11'::uuid, '11'::xid8,
+'pg_class'::regclass, 'regtype'::regtype type,
+'pg_class'::regclass::oid, '(1,1)'::tid, '2'::xid, '3'::cid,
+'10:20:10,14,15'::txid_snapshot, '10:20:10,14,15'::pg_snapshot,
+1::information_schema.cardinal_number,
+'l'::information_schema.character_data,
+'n'::information_schema.sql_identifier,
+'now'::information_schema.time_stamp,
+'YES'::information_schema.yes_or_no;
+-- And now a test on the previous test, checking that all core types are
+-- included in this table (or some other non-catalog table processed by pg_upgrade).
+SELECT typname, typtype, typelem, typarray, typarray FROM pg_type t
+WHERE typnamespace IN ('pg_catalog'::regnamespace, 'information_schema'::regnamespace)
+AND typtype IN ('b', 'e', 'd')
+-- reg* cannot be pg_upgraded
+AND NOT typname~'_|^char$|^reg'
+-- XML might be disabled at compile-time
+AND oid != ALL(ARRAY['gtsvector', 'xml']::regtype[])
+AND NOT EXISTS (SELECT * FROM pg_attribute a WHERE a.atttypid=t.oid AND a.attnum>0 AND a.attrelid='manytypes'::regclass);
+ typname | typtype | typelem | typarray | typarray
+---------+---------+---------+----------+----------
+(0 rows)
+
diff --git a/src/test/regress/sql/type_sanity.sql b/src/test/regress/sql/type_sanity.sql
index 4739aca84a..1df9859118 100644
--- a/src/test/regress/sql/type_sanity.sql
+++ b/src/test/regress/sql/type_sanity.sql
@@ -495,3 +495,41 @@ WHERE pronargs != 2
SELECT p1.rngtypid, p1.rngsubtype, p1.rngmultitypid
FROM pg_range p1
WHERE p1.rngmultitypid IS NULL OR p1.rngmultitypid = 0;
+
+-- Create a table with different data types, to exercise binary compatibility
+-- during pg_upgrade test
+
+CREATE TABLE manytypes AS SELECT
+'(11,12)'::point, '(1,1),(2,2)'::line,
+'((11,11),(12,12))'::lseg, '((11,11),(13,13))'::box,
+'((11,12),(13,13),(14,14))'::path AS openedpath, '[(11,12),(13,13),(14,14)]'::path AS closedpath,
+'((11,12),(13,13),(14,14))'::polygon, '1,1,1'::circle,
+'today'::date, 'now'::time, 'now'::timestamp, 'now'::timetz, 'now'::timestamptz, '12 seconds'::interval,
+'{"reason":"because"}'::json, '{"when":"now"}'::jsonb, '$.a[*] ? (@ > 2)'::jsonpath,
+'127.0.0.1'::inet, '127.0.0.0/8'::cidr, '00:01:03:86:1c:ba'::macaddr8, '00:01:03:86:1c:ba'::macaddr,
+2::int2, 4::int4, 8::int8, 4::float4, '8'::float8, pi()::numeric,
+'c'::bpchar, 'abc'::varchar, 'name'::name, 'txt'::text, true::bool,
+E'\\xDEADBEEF'::bytea, B'10001'::bit, B'10001'::varbit AS varbit, '12.34'::money,
+'abc'::refcursor,
+'1 2'::int2vector, '1 2'::oidvector, format('%s=UC/%s', USER, USER)::aclitem,
+'a fat cat sat on a mat and ate a fat rat'::tsvector, 'fat & rat'::tsquery,
+'a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a11'::uuid, '11'::xid8,
+'pg_class'::regclass, 'regtype'::regtype type,
+'pg_class'::regclass::oid, '(1,1)'::tid, '2'::xid, '3'::cid,
+'10:20:10,14,15'::txid_snapshot, '10:20:10,14,15'::pg_snapshot,
+1::information_schema.cardinal_number,
+'l'::information_schema.character_data,
+'n'::information_schema.sql_identifier,
+'now'::information_schema.time_stamp,
+'YES'::information_schema.yes_or_no;
+
+-- And now a test on the previous test, checking that all core types are
+-- included in this table (or some other non-catalog table processed by pg_upgrade).
+SELECT typname, typtype, typelem, typarray, typarray FROM pg_type t
+WHERE typnamespace IN ('pg_catalog'::regnamespace, 'information_schema'::regnamespace)
+AND typtype IN ('b', 'e', 'd')
+-- reg* cannot be pg_upgraded
+AND NOT typname~'_|^char$|^reg'
+-- XML might be disabled at compile-time
+AND oid != ALL(ARRAY['gtsvector', 'xml']::regtype[])
+AND NOT EXISTS (SELECT * FROM pg_attribute a WHERE a.atttypid=t.oid AND a.attnum>0 AND a.attrelid='manytypes'::regclass);
--
2.17.0
On Mon, Jan 11, 2021 at 10:13:52PM -0600, Justin Pryzby wrote:
On Mon, Jan 11, 2021 at 03:28:08PM +0100, Peter Eisentraut wrote:
I think these patches could use some in-place documentation of what they are
trying to achieve and how they do it. The required information is spread
over a lengthy thread. No one wants to read that. Add commit messages to
the patches.
0001 patch fixes pg_upgrade/test.sh, which was dysfunctional.
Portions of the first patch were independently handled by commits 52202bb39,
fa744697c, 091866724. So this is rebased on those.
I guess updating this script should be a part of a beta-checklist somewhere,
since I guess nobody will want to backpatch changes for testing older releases.
Uh, what exactly is missing from the beta checklist? I read the patch
and commit message but don't understand it.
--
Bruce Momjian <bruce@momjian.us> https://momjian.us
EnterpriseDB https://enterprisedb.com
The usefulness of a cup is in its emptiness, Bruce Lee
On Tue, Jan 12, 2021 at 12:15:59PM -0500, Bruce Momjian wrote:
On Mon, Jan 11, 2021 at 10:13:52PM -0600, Justin Pryzby wrote:
On Mon, Jan 11, 2021 at 03:28:08PM +0100, Peter Eisentraut wrote:
I think these patches could use some in-place documentation of what they are
trying to achieve and how they do it. The required information is spread
over a lengthy thread. No one wants to read that. Add commit messages to
the patches.
0001 patch fixes pg_upgrade/test.sh, which was dysfunctional.
Portions of the first patch were independently handled by commits 52202bb39,
fa744697c, 091866724. So this is rebased on those.
I guess updating this script should be a part of a beta-checklist somewhere,
since I guess nobody will want to backpatch changes for testing older releases.
Uh, what exactly is missing from the beta checklist? I read the patch
and commit message but don't understand it.
Did you try to use test.sh to upgrade from a prior release?
Evidently it's frequently forgotten, as evidenced by all the "deferred
maintenance" I had to do to allow testing the main patch (currently 0003).
See also:
commit 5bab1985dfc25eecf4b098145789955c0b246160
Author: Tom Lane <tgl@sss.pgh.pa.us>
Date: Thu Jun 8 13:48:27 2017 -0400
Fix bit-rot in pg_upgrade's test.sh, and improve documentation.
Doing a cross-version upgrade test with test.sh evidently hasn't been
tested since circa 9.2, because the script lacked case branches for
old-version servers newer than 9.1. Future-proof that a bit, and
clean up breakage induced by our recent drop of V0 function call
protocol (namely that oldstyle_length() isn't in the regression
suite anymore).
--
Justin
On Tue, Jan 12, 2021 at 11:27:53AM -0600, Justin Pryzby wrote:
On Tue, Jan 12, 2021 at 12:15:59PM -0500, Bruce Momjian wrote:
Uh, what exactly is missing from the beta checklist? I read the patch
and commit message but don't understand it.
Did you try to use test.sh to upgrade from a prior release?
Evidently it's frequently forgotten, as evidenced by all the "deferred
maintenance" I had to do to allow testing the main patch (currently 0003).
See also:
commit 5bab1985dfc25eecf4b098145789955c0b246160
Author: Tom Lane <tgl@sss.pgh.pa.us>
Date: Thu Jun 8 13:48:27 2017 -0400
Fix bit-rot in pg_upgrade's test.sh, and improve documentation.
Doing a cross-version upgrade test with test.sh evidently hasn't been
tested since circa 9.2, because the script lacked case branches for
old-version servers newer than 9.1. Future-proof that a bit, and
clean up breakage induced by our recent drop of V0 function call
protocol (namely that oldstyle_length() isn't in the regression
suite anymore).
Oh, that is odd. I thought that was regularly run. I have my own test
infrastructure that I run for every major release so I never have run
the built-in one, except for make check-world.
--
Bruce Momjian <bruce@momjian.us> https://momjian.us
EnterpriseDB https://enterprisedb.com
The usefulness of a cup is in its emptiness, Bruce Lee
On 1/12/21 12:53 PM, Bruce Momjian wrote:
On Tue, Jan 12, 2021 at 11:27:53AM -0600, Justin Pryzby wrote:
On Tue, Jan 12, 2021 at 12:15:59PM -0500, Bruce Momjian wrote:
Uh, what exactly is missing from the beta checklist? I read the patch
and commit message but don't understand it.
Did you try to use test.sh to upgrade from a prior release?
Evidently it's frequently forgotten, as evidenced by all the "deferred
maintenance" I had to do to allow testing the main patch (currently 0003).
See also:
commit 5bab1985dfc25eecf4b098145789955c0b246160
Author: Tom Lane <tgl@sss.pgh.pa.us>
Date: Thu Jun 8 13:48:27 2017 -0400
Fix bit-rot in pg_upgrade's test.sh, and improve documentation.
Doing a cross-version upgrade test with test.sh evidently hasn't been
tested since circa 9.2, because the script lacked case branches for
old-version servers newer than 9.1. Future-proof that a bit, and
clean up breakage induced by our recent drop of V0 function call
protocol (namely that oldstyle_length() isn't in the regression
suite anymore).
Oh, that is odd. I thought that was regularly run. I have my own test
infrastructure that I run for every major release so I never have run
the built-in one, except for make check-world.
Cross version pg_upgrade is tested regularly in the buildfarm, but not
using test.sh. Instead it uses the saved data repository from a previous
run of the buildfarm client for the source branch, and tries to upgrade
that to the target branch.
cheers
andrew
--
Andrew Dunstan
EDB: https://www.enterprisedb.com
On 2021-01-12 22:44, Andrew Dunstan wrote:
Cross version pg_upgrade is tested regularly in the buildfarm, but not
using test.sh. Instead it uses the saved data repository from a previous
run of the buildfarm client for the source branch, and tries to upgrade
that to the target branch.
Does it maintain a set of fixups similar to what is in test.sh? Are
those two sets the same?
Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:
On 2021-01-12 22:44, Andrew Dunstan wrote:
Cross version pg_upgrade is tested regularly in the buildfarm, but not
using test.sh. Instead it uses the saved data repository from a previous
run of the buildfarm client for the source branch, and tries to upgrade
that to the target branch.
Does it maintain a set of fixups similar to what is in test.sh? Are
those two sets the same?
Responding to Peter: the first answer is yes, the second is I didn't
check, but certainly Justin's patch makes them closer.
I spent some time poking through this set of patches. I agree that
there's problem(s) here that we need to solve, but it feels like this
isn't a great way to solve them. What I see in the patchset is:
v4-0001 mostly teaches test.sh about specific changes that have to be
made to historic versions of the regression database to allow them
to be reloaded into current servers. As already discussed, this is
really duplicative of knowledge that's been embedded into the buildfarm
client over time. It'd be better if we could refactor that so that
the buildfarm shares a common database of these actions with test.sh.
And said database ought to be in our git tree, so committers could
fix problems without having to get Andrew involved every time.
I think this could be represented as a psql script, at least in
versions that have psql \if (but that came in in v10, so maybe
we're there already).
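As a rough illustration of what such a shared, in-tree fixup script could look like (the file name, variable name, and which statements get gated are hypothetical, not from any committed script):

```sql
-- fixups.sql (hypothetical): run against the old regression database
-- before dumping, with booleans computed by the caller, e.g.
--   psql -X -d regression -v old_lt_12=true -f fixups.sql
-- IF EXISTS makes most statements safe against any old version.
DROP FUNCTION IF EXISTS public.oldstyle_length(integer, text);
DROP OPERATOR IF EXISTS public.#@# (pg_catalog.int8, NONE);
\if :old_lt_12
	-- table OIDs were removed in v12 (commit 578b22971), so strip
	-- them while still running under the old server
	ALTER TABLE public.tenk1 SET WITHOUT OIDS;
\endif
```

Since \if arrived in v10, the version-dependent parts would need the caller (test.sh or the buildfarm client) to pass the booleans in via -v.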
(Taking a step back, maybe the regression database isn't an ideal
testbed for this in the first place. But it does have the advantage of
not being a narrow-minded test that is going to miss things we haven't
explicitly thought of.)
v4-0002 is a bunch of random changes that mostly seem to revert hacky
adjustments previously made to improve test coverage. I don't really
agree with any of these, nor see why they're necessary. If they
are necessary then we need to restore the coverage somewhere else.
Admittedly, the previous changes were a bit hacky, but deleting them
(without even bothering to adjust the relevant comments) isn't the
answer.
v4-0003 is really the heart of the matter: it adds a table with some
previously-not-covered datatypes plus a query that purports to make sure
that we are covering all types of interest. But I'm not sure I believe
that query. It's got hard-wired assumptions about which typtype values
need to be covered. Why is it okay to exclude range and multirange?
Are we sure that all composites are okay to exclude? Likewise, the
restriction to pg_catalog and information_schema schemas seems likely to
bite us someday. There are some very random exclusions based on name
patterns, which seem unsafe (let's list the specific type OIDs), and
again the nearby comments don't match the code. But the biggest issue
is that this can only cover core datatypes, not any contrib stuff.
I don't know what we could do about contrib types. Maybe we should
figure that covering core types is already a step forward, and be
happy with getting that done.
regards, tom lane
On Sat, Mar 06, 2021 at 03:01:43PM -0500, Tom Lane wrote:
Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:
On 2021-01-12 22:44, Andrew Dunstan wrote:
Cross version pg_upgrade is tested regularly in the buildfarm, but not
using test.sh. Instead it uses the saved data repository from a previous
run of the buildfarm client for the source branch, and tries to upgrade
that to the target branch.
Does it maintain a set of fixups similar to what is in test.sh? Are
those two sets the same?
Responding to Peter: the first answer is yes, the second is I didn't
check, but certainly Justin's patch makes them closer.
Right - I had meant to send this.
https://github.com/PGBuildFarm/client-code/blob/master/PGBuild/Modules/TestUpgradeXversion.pm
$opsql = 'drop operator if exists public.=> (bigint, NONE)';
..
my $missing_funcs = q{drop function if exists public.boxarea(box);
drop function if exists public.funny_dup17();
..
my $prstmt = join(';',
'drop operator if exists #@# (bigint,NONE)',
'drop operator if exists #%# (bigint,NONE)',
'drop operator if exists !=- (bigint,NONE)',
..
$prstmt = join(';',
'drop operator @#@ (NONE, bigint)',
..
'drop aggregate if exists public.array_cat_accum(anyarray)',
I spent some time poking through this set of patches. I agree that
there's problem(s) here that we need to solve, but it feels like this
isn't a great way to solve them. What I see in the patchset is:
For starters, is there a "release beta checklist"?
Testing test.sh should be on it.
So should fuzz testing.
v4-0001 mostly teaches test.sh about specific changes that have to be
made to historic versions of the regression database to allow them
to be reloaded into current servers. As already discussed, this is
really duplicative of knowledge that's been embedded into the buildfarm
client over time. It'd be better if we could refactor that so that
the buildfarm shares a common database of these actions with test.sh.
And said database ought to be in our git tree, so committers could
fix problems without having to get Andrew involved every time.
I think this could be represented as a psql script, at least in
versions that have psql \if (but that came in in v10, so maybe
we're there already).
I started this. I don't know if it's compatible with the buildfarm client, but
I think any issues can perhaps be avoided by using "IF EXISTS".
v4-0002 is a bunch of random changes that mostly seem to revert hacky
adjustments previously made to improve test coverage. I don't really
agree with any of these, nor see why they're necessary. If they
are necessary then we need to restore the coverage somewhere else.
Admittedly, the previous changes were a bit hacky, but deleting them
(without even bothering to adjust the relevant comments) isn't the
answer.
It was necessary to avoid --wal-segsize and -g to allow testing upgrades from
versions which don't support those options. I think test.sh should be portable
back to all supported versions.
When those options were added, it broke test.sh upgrading from old versions.
I changed this to a shell conditional for the "new" features:
| "$1" -N -A trust ${oldsrc:+--wal-segsize 1 -g}
Ideally it would check the version.
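A minimal sketch of what such a version check might look like, reusing the $oldpgversion value test.sh already computes (the helper function name is hypothetical):

```shell
# Hypothetical helper: choose initdb flags from the old server's
# version number (PG_VERSION_NUM style, e.g. 110000 for v11), instead
# of keying on whether $oldsrc is set.
initdb_flags_for_version() {
	ver=$1
	flags="-N -A trust"
	# --wal-segsize (fc49e24fa) and -g (c37b3d08c) both appeared in v11
	if [ "$ver" -ge 110000 ]; then
		flags="$flags --wal-segsize 1 -g"
	fi
	echo "$flags"
}

initdb_flags_for_version 100000   # -> -N -A trust
initdb_flags_for_version 110000   # -> -N -A trust --wal-segsize 1 -g
```

standard_initdb() could then call "$1" $(initdb_flags_for_version "$oldpgversion") rather than branching on $oldsrc.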
v4-0003 is really the heart of the matter: it adds a table with some
previously-not-covered datatypes plus a query that purports to make sure
that we are covering all types of interest.
Actually the 'manytypes' table intends to include *all* core datatypes itself,
not just those that aren't included somewhere else. I think "included
somewhere else" depends on the order of the regression tests, and type_sanity
runs early, so the table might need to include many types that are created
later, to avoid "false positives" in the associated test.
But I'm not sure I believe
that query. It's got hard-wired assumptions about which typtype values
need to be covered. Why is it okay to exclude range and multirange?
Are we sure that all composites are okay to exclude? Likewise, the
restriction to pg_catalog and information_schema schemas seems likely to
bite us someday. There are some very random exclusions based on name
patterns, which seem unsafe (let's list the specific type OIDs), and
again the nearby comments don't match the code. But the biggest issue
is that this can only cover core datatypes, not any contrib stuff.
I changed to use regtype/OIDs, included range/multirange and stopped including
only pg_catalog/information_schema. But didn't yet handle composites.
I don't know what we could do about contrib types. Maybe we should
figure that covering core types is already a step forward, and be
happy with getting that done.
Right .. this is meant to at least handle the lowest hanging fruit.
--
Justin
Attachments:
v5-0001-WIP-pg_upgrade-test.sh-changes-needed-to-allow-te.patch (text/x-diff; charset=us-ascii)
From 79bed0997a1c720f103100697bdaa0cb1ee1261d Mon Sep 17 00:00:00 2001
From: Justin Pryzby <pryzbyj@telsasoft.com>
Date: Sat, 5 Dec 2020 22:31:19 -0600
Subject: [PATCH v5 1/4] WIP: pg_upgrade/test.sh: changes needed to allow
testing upgrade to v14dev from v9.5-v13
---
src/bin/pg_upgrade/test.sh | 93 +++++++++++++++++++++++++++++++++++---
1 file changed, 86 insertions(+), 7 deletions(-)
diff --git a/src/bin/pg_upgrade/test.sh b/src/bin/pg_upgrade/test.sh
index 1ba326decd..9288cfdda8 100644
--- a/src/bin/pg_upgrade/test.sh
+++ b/src/bin/pg_upgrade/test.sh
@@ -176,18 +176,97 @@ if "$MAKE" -C "$oldsrc" installcheck-parallel; then
esac
fix_sql="$fix_sql
DROP FUNCTION IF EXISTS
- public.oldstyle_length(integer, text); -- last in 9.6
+ public.oldstyle_length(integer, text);" # last in 9.6 -- commit 5ded4bd21
+ fix_sql="$fix_sql
DROP FUNCTION IF EXISTS
- public.putenv(text); -- last in v13
- DROP OPERATOR IF EXISTS -- last in v13
- public.#@# (pg_catalog.int8, NONE),
- public.#%# (pg_catalog.int8, NONE),
- public.!=- (pg_catalog.int8, NONE),
+ public.putenv(text);" # last in v13
+ # last in v13 commit 76f412ab3
+ # public.!=- This one is only needed for v11+ ??
+ # Note, until v10, operators could only be dropped one at a time
+ fix_sql="$fix_sql
+ DROP OPERATOR IF EXISTS
+ public.#@# (pg_catalog.int8, NONE);"
+ fix_sql="$fix_sql
+ DROP OPERATOR IF EXISTS
+ public.#%# (pg_catalog.int8, NONE);"
+ fix_sql="$fix_sql
+ DROP OPERATOR IF EXISTS
+ public.!=- (pg_catalog.int8, NONE);"
+ fix_sql="$fix_sql
+ DROP OPERATOR IF EXISTS
public.#@%# (pg_catalog.int8, NONE);"
+
+ # commit 068503c76511cdb0080bab689662a20e86b9c845
+ case $oldpgversion in
+ 10????)
+ fix_sql="$fix_sql
+ DROP TRANSFORM FOR integer LANGUAGE sql CASCADE;"
+ ;;
+ esac
+
+ # commit db3af9feb19f39827e916145f88fa5eca3130cb2
+ case $oldpgversion in
+ 10????)
+ fix_sql="$fix_sql
+ DROP FUNCTION boxarea(box);"
+ fix_sql="$fix_sql
+ DROP FUNCTION funny_dup17();"
+ ;;
+ esac
+
+ # commit cda6a8d01d391eab45c4b3e0043a1b2b31072f5f
+ case $oldpgversion in
+ 10????)
+ fix_sql="$fix_sql
+ DROP TABLE abstime_tbl;"
+ fix_sql="$fix_sql
+ DROP TABLE reltime_tbl;"
+ fix_sql="$fix_sql
+ DROP TABLE tinterval_tbl;"
+ ;;
+ esac
+
+ # Various things removed for v14
+ case $oldpgversion in
+ 906??|10????|11????|12????|13????)
+ fix_sql="$fix_sql
+ DROP AGGREGATE first_el_agg_any(anyelement);"
+ ;;
+ esac
+ case $oldpgversion in
+ 90[56]??|10????|11????|12????|13????)
+ # commit 9e38c2bb5 and 97f73a978
+ # fix_sql="$fix_sql DROP AGGREGATE array_larger_accum(anyarray);"
+ fix_sql="$fix_sql
+ DROP AGGREGATE array_cat_accum(anyarray);"
+
+ # commit 76f412ab3
+ #fix_sql="$fix_sql DROP OPERATOR @#@(bigint,NONE);"
+ fix_sql="$fix_sql
+ DROP OPERATOR @#@(NONE,bigint);"
+ ;;
+ esac
+
+ # commit 578b22971: OIDS removed in v12
+ case $oldpgversion in
+ 804??|9????|10????|11????)
+ fix_sql="$fix_sql
+ ALTER TABLE public.tenk1 SET WITHOUT OIDS;"
+ fix_sql="$fix_sql
+ ALTER TABLE public.tenk1 SET WITHOUT OIDS;"
+ #fix_sql="$fix_sql ALTER TABLE public.stud_emp SET WITHOUT OIDS;" # inherited
+ fix_sql="$fix_sql
+ ALTER TABLE public.emp SET WITHOUT OIDS;"
+ fix_sql="$fix_sql
+ ALTER TABLE public.tt7 SET WITHOUT OIDS;"
+ ;;
+ esac
+
psql -X -d regression -c "$fix_sql;" || psql_fix_sql_status=$?
fi
- pg_dumpall --no-sync -f "$temp_root"/dump1.sql || pg_dumpall1_status=$?
+ echo "fix_sql: $oldpgversion: $fix_sql" >&2
+ pg_dumpall --extra-float-digits=0 --no-sync -f "$temp_root"/dump1.sql || pg_dumpall1_status=$?
if [ "$newsrc" != "$oldsrc" ]; then
# update references to old source tree's regress.so etc
--
2.17.0
v5-0002-More-changes-needed-to-allow-upgrade-testing.patch (text/x-diff; charset=us-ascii)
From 8280518a897e3745ede459e31cc78c9e4b0efdbe Mon Sep 17 00:00:00 2001
From: Justin Pryzby <pryzbyj@telsasoft.com>
Date: Mon, 11 Jan 2021 21:41:16 -0600
Subject: [PATCH v5 2/4] More changes needed to allow upgrade testing:
These all seem to complicate use of pg_upgrade/test.sh:
e78900afd217fa3eaa77c51e23a94c1466af421c Create by default sql/ and expected/ for output directory in pg_regress
40b132c1afbb4b1494aa8e48cc35ec98d2b90777 In the pg_upgrade test suite, don't write to src/test/regress.
fc49e24fa69a15efacd5b8958115ed9c43c48f9a Make WAL segment size configurable at initdb time.
c37b3d08ca6873f9d4eaf24c72a90a550970cbb8 Allow group access on PGDATA
da9b580d89903fee871cf54845ffa2b26bda2e11 Refactor dir/file permissions
---
src/bin/pg_upgrade/test.sh | 29 ++++++++++++++++++++++-------
1 file changed, 22 insertions(+), 7 deletions(-)
diff --git a/src/bin/pg_upgrade/test.sh b/src/bin/pg_upgrade/test.sh
index 9288cfdda8..74c29229ac 100644
--- a/src/bin/pg_upgrade/test.sh
+++ b/src/bin/pg_upgrade/test.sh
@@ -23,7 +23,13 @@ standard_initdb() {
# To increase coverage of non-standard segment size and group access
# without increasing test runtime, run these tests with a custom setting.
# Also, specify "-A trust" explicitly to suppress initdb's warning.
- "$1" -N --wal-segsize 1 -g -A trust
+ if [ -z "$oldsrc" ]
+ then
+ "$1" -N -A trust --wal-segsize 1 -g
+ else
+ "$1" -N -A trust
+ fi
+
if [ -n "$TEMP_CONFIG" -a -r "$TEMP_CONFIG" ]
then
cat "$TEMP_CONFIG" >> "$PGDATA/postgresql.conf"
@@ -106,6 +112,10 @@ outputdir="$temp_root/regress"
EXTRA_REGRESS_OPTS="$EXTRA_REGRESS_OPTS --outputdir=$outputdir"
export EXTRA_REGRESS_OPTS
mkdir "$outputdir"
+mkdir "$outputdir"/testtablespace
+
+mkdir "$outputdir"/sql
+mkdir "$outputdir"/expected
logdir=`pwd`/log
rm -rf "$logdir"
@@ -311,24 +321,29 @@ pg_upgrade $PG_UPGRADE_OPTS -d "${PGDATA}.old" -D "$PGDATA" -b "$oldbindir" -p "
# make sure all directories and files have group permissions, on Unix hosts
# Windows hosts don't support Unix-y permissions.
case $testhost in
- MINGW*|CYGWIN*) ;;
- *) if [ `find "$PGDATA" -type f ! -perm 640 | wc -l` -ne 0 ]; then
- echo "files in PGDATA with permission != 640";
+ *)
+ x=`find "$PGDATA" -type f ! -perm 600 ! -perm 640 -ls`
+ if [ -n "$x" ]; then
+ echo "files in PGDATA with permission NOT IN (600, 640)";
+ echo "$x" |head
exit 1;
fi ;;
esac
case $testhost in
MINGW*|CYGWIN*) ;;
- *) if [ `find "$PGDATA" -type d ! -perm 750 | wc -l` -ne 0 ]; then
- echo "directories in PGDATA with permission != 750";
+ *)
+ x=`find "$PGDATA" -type d ! -perm 700 ! -perm 750 -ls`
+ if [ "$x" ]; then
+ echo "directories in PGDATA with permission NOT IN (700, 750)";
+ echo "$x" |head
exit 1;
fi ;;
esac
pg_ctl start -l "$logdir/postmaster2.log" -o "$POSTMASTER_OPTS" -w
-pg_dumpall --no-sync -f "$temp_root"/dump2.sql || pg_dumpall2_status=$?
+pg_dumpall --extra-float-digits=0 --no-sync -f "$temp_root"/dump2.sql || pg_dumpall2_status=$?
pg_ctl -m fast stop
if [ -n "$pg_dumpall2_status" ]; then
--
2.17.0
v5-0003-pg_upgrade-test-to-exercise-binary-compatibility.patch (text/x-diff; charset=us-ascii)
From 787dd9cd5a2088c8e545691bf1d532466c8311d6 Mon Sep 17 00:00:00 2001
From: Justin Pryzby <pryzbyj@telsasoft.com>
Date: Sat, 5 Dec 2020 17:20:09 -0600
Subject: [PATCH v5 3/4] pg_upgrade: test to exercise binary compatibility
Creating a table with columns of many different datatypes to notice if the
binary format is accidentally changed again, as happened at:
7c15cef86 Base information_schema.sql_identifier domain on name, not varchar.
I checked that if I cherry-pick to v11, and comment out
old_11_check_for_sql_identifier_data_type_usage(), then pg_upgrade/test.sh
detects the original problem:
pg_dump: error: Error message from server: ERROR: invalid memory alloc request size 18446744073709551613
I understand the buildfarm has its own cross-version-upgrade test, which I
think would catch this on its own.
---
src/test/regress/expected/sanity_check.out | 1 +
src/test/regress/expected/type_sanity.out | 55 ++++++++++++++++++++++
src/test/regress/sql/type_sanity.sql | 54 +++++++++++++++++++++
3 files changed, 110 insertions(+)
diff --git a/src/test/regress/expected/sanity_check.out b/src/test/regress/expected/sanity_check.out
index d9ce961be2..f67e3853ff 100644
--- a/src/test/regress/expected/sanity_check.out
+++ b/src/test/regress/expected/sanity_check.out
@@ -69,6 +69,7 @@ line_tbl|f
log_table|f
lseg_tbl|f
main_table|f
+manytypes|f
mlparted|f
mlparted1|f
mlparted11|f
diff --git a/src/test/regress/expected/type_sanity.out b/src/test/regress/expected/type_sanity.out
index 5480f979c6..9ae2922169 100644
--- a/src/test/regress/expected/type_sanity.out
+++ b/src/test/regress/expected/type_sanity.out
@@ -674,3 +674,58 @@ WHERE p1.rngmultitypid IS NULL OR p1.rngmultitypid = 0;
----------+------------+---------------
(0 rows)
+-- Create a table with different data types, to exercise binary compatibility
+-- during pg_upgrade test
+CREATE TABLE manytypes AS SELECT
+'(11,12)'::point, '(1,1),(2,2)'::line,
+'((11,11),(12,12))'::lseg, '((11,11),(13,13))'::box,
+'((11,12),(13,13),(14,14))'::path AS openedpath, '[(11,12),(13,13),(14,14)]'::path AS closedpath,
+'((11,12),(13,13),(14,14))'::polygon, '1,1,1'::circle,
+'today'::date, 'now'::time, 'now'::timestamp, 'now'::timetz, 'now'::timestamptz, '12 seconds'::interval,
+'{"reason":"because"}'::json, '{"when":"now"}'::jsonb, '$.a[*] ? (@ > 2)'::jsonpath,
+'127.0.0.1'::inet, '127.0.0.0/8'::cidr, '00:01:03:86:1c:ba'::macaddr8, '00:01:03:86:1c:ba'::macaddr,
+2::int2, 4::int4, 8::int8, 4::float4, '8'::float8, pi()::numeric,
+'foo'::"char", 'c'::bpchar, 'abc'::varchar, 'name'::name, 'txt'::text, true::bool,
+E'\\xDEADBEEF'::bytea, B'10001'::bit, B'10001'::varbit AS varbit, '12.34'::money,
+'abc'::refcursor,
+'1 2'::int2vector, '1 2'::oidvector, format('%s=UC/%s', USER, USER)::aclitem,
+'a fat cat sat on a mat and ate a fat rat'::tsvector, 'fat & rat'::tsquery,
+'a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a11'::uuid, '11'::xid8,
+'pg_class'::regclass, 'regtype'::regtype type, 'pg_monitor'::regrole,
+'pg_class'::regclass::oid, '(1,1)'::tid, '2'::xid, '3'::cid,
+'10:20:10,14,15'::txid_snapshot, '10:20:10,14,15'::pg_snapshot, '16/B374D848'::pg_lsn,
+1::information_schema.cardinal_number,
+'l'::information_schema.character_data,
+'n'::information_schema.sql_identifier,
+'now'::information_schema.time_stamp,
+'YES'::information_schema.yes_or_no,
+'venus'::planets, 'i16'::insenum,
+'(1,2)'::int4range, '{(1,2)}'::int4multirange,
+'(3,4)'::int8range, '{(3,4)}'::int8multirange,
+'(1,2)'::float8range, '{(1,2)}'::float8multirange,
+'(3,4)'::numrange, '{(3,4)}'::nummultirange,
+'(a,b)'::textrange, '{(a,b)}'::textmultirange,
+'(12.34, 56.78)'::cashrange, '{(12.34, 56.78)}'::cashmultirange,
+'(2020-01-02, 2021-02-03)'::daterange,
+'{(2020-01-02, 2021-02-03)}'::datemultirange,
+'(2020-01-02 03:04:05, 2021-02-03 06:07:08)'::tsrange,
+'{(2020-01-02 03:04:05, 2021-02-03 06:07:08)}'::tsmultirange,
+'(2020-01-02 03:04:05, 2021-02-03 06:07:08)'::tstzrange,
+'{(2020-01-02 03:04:05, 2021-02-03 06:07:08)}'::tstzmultirange,
+arrayrange(ARRAY[1,2], ARRAY[2,1]),
+arraymultirange(arrayrange(ARRAY[1,2], ARRAY[2,1]));
+-- And now a test on the previous test, checking that all core types are
+-- included in this table
+-- XXX or some other non-catalog table processed by pg_upgrade
+SELECT oid, typname, typtype, typelem, typarray, typarray FROM pg_type t
+WHERE typtype NOT IN ('p', 'c')
+-- reg* which cannot be pg_upgraded
+AND oid != ALL(ARRAY['regproc', 'regprocedure', 'regoper', 'regoperator', 'regconfig', 'regdictionary', 'regnamespace', 'regcollation']::regtype[])
+-- XML might be disabled at compile-time
+AND oid != ALL(ARRAY['xml', 'gtsvector', 'pg_node_tree', 'pg_ndistinct', 'pg_dependencies', 'pg_mcv_list']::regtype[])
+AND NOT EXISTS (SELECT 1 FROM pg_type u WHERE u.typarray=t.oid) -- exclude arrays
+AND NOT EXISTS (SELECT 1 FROM pg_attribute a WHERE a.atttypid=t.oid AND a.attnum>0 AND a.attrelid='manytypes'::regclass);
+ oid | typname | typtype | typelem | typarray | typarray
+-----+---------+---------+---------+----------+----------
+(0 rows)
+
diff --git a/src/test/regress/sql/type_sanity.sql b/src/test/regress/sql/type_sanity.sql
index 4739aca84a..668fe92778 100644
--- a/src/test/regress/sql/type_sanity.sql
+++ b/src/test/regress/sql/type_sanity.sql
@@ -495,3 +495,57 @@ WHERE pronargs != 2
SELECT p1.rngtypid, p1.rngsubtype, p1.rngmultitypid
FROM pg_range p1
WHERE p1.rngmultitypid IS NULL OR p1.rngmultitypid = 0;
+
+-- Create a table with different data types, to exercise binary compatibility
+-- during pg_upgrade test
+
+CREATE TABLE manytypes AS SELECT
+'(11,12)'::point, '(1,1),(2,2)'::line,
+'((11,11),(12,12))'::lseg, '((11,11),(13,13))'::box,
+'((11,12),(13,13),(14,14))'::path AS openedpath, '[(11,12),(13,13),(14,14)]'::path AS closedpath,
+'((11,12),(13,13),(14,14))'::polygon, '1,1,1'::circle,
+'today'::date, 'now'::time, 'now'::timestamp, 'now'::timetz, 'now'::timestamptz, '12 seconds'::interval,
+'{"reason":"because"}'::json, '{"when":"now"}'::jsonb, '$.a[*] ? (@ > 2)'::jsonpath,
+'127.0.0.1'::inet, '127.0.0.0/8'::cidr, '00:01:03:86:1c:ba'::macaddr8, '00:01:03:86:1c:ba'::macaddr,
+2::int2, 4::int4, 8::int8, 4::float4, '8'::float8, pi()::numeric,
+'foo'::"char", 'c'::bpchar, 'abc'::varchar, 'name'::name, 'txt'::text, true::bool,
+E'\\xDEADBEEF'::bytea, B'10001'::bit, B'10001'::varbit AS varbit, '12.34'::money,
+'abc'::refcursor,
+'1 2'::int2vector, '1 2'::oidvector, format('%s=UC/%s', USER, USER)::aclitem,
+'a fat cat sat on a mat and ate a fat rat'::tsvector, 'fat & rat'::tsquery,
+'a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a11'::uuid, '11'::xid8,
+'pg_class'::regclass, 'regtype'::regtype type, 'pg_monitor'::regrole,
+'pg_class'::regclass::oid, '(1,1)'::tid, '2'::xid, '3'::cid,
+'10:20:10,14,15'::txid_snapshot, '10:20:10,14,15'::pg_snapshot, '16/B374D848'::pg_lsn,
+1::information_schema.cardinal_number,
+'l'::information_schema.character_data,
+'n'::information_schema.sql_identifier,
+'now'::information_schema.time_stamp,
+'YES'::information_schema.yes_or_no,
+'venus'::planets, 'i16'::insenum,
+'(1,2)'::int4range, '{(1,2)}'::int4multirange,
+'(3,4)'::int8range, '{(3,4)}'::int8multirange,
+'(1,2)'::float8range, '{(1,2)}'::float8multirange,
+'(3,4)'::numrange, '{(3,4)}'::nummultirange,
+'(a,b)'::textrange, '{(a,b)}'::textmultirange,
+'(12.34, 56.78)'::cashrange, '{(12.34, 56.78)}'::cashmultirange,
+'(2020-01-02, 2021-02-03)'::daterange,
+'{(2020-01-02, 2021-02-03)}'::datemultirange,
+'(2020-01-02 03:04:05, 2021-02-03 06:07:08)'::tsrange,
+'{(2020-01-02 03:04:05, 2021-02-03 06:07:08)}'::tsmultirange,
+'(2020-01-02 03:04:05, 2021-02-03 06:07:08)'::tstzrange,
+'{(2020-01-02 03:04:05, 2021-02-03 06:07:08)}'::tstzmultirange,
+arrayrange(ARRAY[1,2], ARRAY[2,1]),
+arraymultirange(arrayrange(ARRAY[1,2], ARRAY[2,1]));
+
+-- And now a test on the previous test, checking that all core types are
+-- included in this table
+-- XXX or some other non-catalog table processed by pg_upgrade
+SELECT oid, typname, typtype, typelem, typarray, typarray FROM pg_type t
+WHERE typtype NOT IN ('p', 'c')
+-- reg* which cannot be pg_upgraded
+AND oid != ALL(ARRAY['regproc', 'regprocedure', 'regoper', 'regoperator', 'regconfig', 'regdictionary', 'regnamespace', 'regcollation']::regtype[])
+-- XML might be disabled at compile-time
+AND oid != ALL(ARRAY['xml', 'gtsvector', 'pg_node_tree', 'pg_ndistinct', 'pg_dependencies', 'pg_mcv_list']::regtype[])
+AND NOT EXISTS (SELECT 1 FROM pg_type u WHERE u.typarray=t.oid) -- exclude arrays
+AND NOT EXISTS (SELECT 1 FROM pg_attribute a WHERE a.atttypid=t.oid AND a.attnum>0 AND a.attrelid='manytypes'::regclass);
--
2.17.0
Attachment: v5-0004-Move-pg_upgrade-kludges-to-sql-script.patch (text/x-diff; charset=us-ascii)
From 7190f3b6b19cd23d8a6b2eca2374ac1e92d94d18 Mon Sep 17 00:00:00 2001
From: Justin Pryzby <pryzbyj@telsasoft.com>
Date: Sat, 6 Mar 2021 18:35:26 -0600
Subject: [PATCH v5 4/4] Move pg_upgrade kludges to sql script
---
src/bin/pg_upgrade/test-upgrade.sql | 89 ++++++++++++++++++++++++++
src/bin/pg_upgrade/test.sh | 96 +----------------------------
2 files changed, 90 insertions(+), 95 deletions(-)
create mode 100644 src/bin/pg_upgrade/test-upgrade.sql
diff --git a/src/bin/pg_upgrade/test-upgrade.sql b/src/bin/pg_upgrade/test-upgrade.sql
new file mode 100644
index 0000000000..3dadd6ea74
--- /dev/null
+++ b/src/bin/pg_upgrade/test-upgrade.sql
@@ -0,0 +1,89 @@
+-- This file has a bunch of kludges needed for upgrading testing across major versions
+
+SELECT
+ ver >= 804 AND ver <= 1100 AS fromv84v11,
+ ver >= 905 AND ver <= 1300 AS fromv95v13,
+ ver >= 906 AND ver <= 1300 AS fromv96v13,
+ ver <= 80400 AS fromv84,
+ ver <= 90500 AS fromv95,
+ ver <= 90600 AS fromv96,
+ ver <= 100000 AS fromv10,
+ ver <= 110000 AS fromv11,
+ ver <= 120000 AS fromv12,
+ ver <= 130000 AS fromv13
+ FROM (SELECT current_setting('server_version_num')::int/100 AS ver) AS v;
+\gset
+
+\if :fromv84
+DROP FUNCTION public.myfunc(integer);
+\endif
+
+-- last in 9.6 -- commit 5ded4bd21
+DROP FUNCTION IF EXISTS public.oldstyle_length(integer, text);
+DROP FUNCTION IF EXISTS public.putenv(text);
+
+\if :fromv13
+-- last in v13 commit 76f412ab3
+-- public.!=- This one is only needed for v11+ ??
+-- Note, until v10, operators could only be dropped one at a time
+DROP OPERATOR IF EXISTS public.#@# (pg_catalog.int8, NONE);
+DROP OPERATOR IF EXISTS public.#%# (pg_catalog.int8, NONE);
+DROP OPERATOR IF EXISTS public.!=- (pg_catalog.int8, NONE);
+DROP OPERATOR IF EXISTS public.#@%# (pg_catalog.int8, NONE);
+\endif
+
+\if :fromv10
+-- commit 068503c76511cdb0080bab689662a20e86b9c845
+DROP TRANSFORM FOR integer LANGUAGE sql CASCADE;
+
+-- commit db3af9feb19f39827e916145f88fa5eca3130cb2
+DROP FUNCTION boxarea(box);
+DROP FUNCTION funny_dup17();
+
+-- commit cda6a8d01d391eab45c4b3e0043a1b2b31072f5f
+DROP TABLE abstime_tbl;
+DROP TABLE reltime_tbl;
+DROP TABLE tinterval_tbl;
+\endif
+
+\if :fromv96v13
+-- Various things removed for v14
+DROP AGGREGATE first_el_agg_any(anyelement);
+\endif
+
+\if :fromv95v13
+-- commit 9e38c2bb5 and 97f73a978
+-- DROP AGGREGATE array_larger_accum(anyarray);
+DROP AGGREGATE array_cat_accum(anyarray);
+
+-- commit 76f412ab3
+-- DROP OPERATOR @#@(bigint,NONE);
+DROP OPERATOR @#@(NONE,bigint);
+\endif
+
+-- \if :fromv84v11
+\if :fromv11
+-- commit 578b22971: OIDS removed in v12
+ALTER TABLE public.tenk1 SET WITHOUT OIDS;
+ALTER TABLE public.tenk1 SET WITHOUT OIDS;
+-- fix_sql="$fix_sql ALTER TABLE public.stud_emp SET WITHOUT OIDS;" # inherited
+ALTER TABLE public.emp SET WITHOUT OIDS;
+ALTER TABLE public.tt7 SET WITHOUT OIDS;
+\endif
+
+-- if [ "$newsrc" != "$oldsrc" ]; then
+-- # update references to old source tree's regress.so etc
+-- fix_sql=""
+-- case $oldpgversion in
+-- 804??)
+-- fix_sql="UPDATE pg_proc SET probin = replace(probin::text, '$oldsrc', '$newsrc')::bytea WHERE probin LIKE '$oldsrc%';"
+-- ;;
+-- *)
+-- fix_sql="UPDATE pg_proc SET probin = replace(probin, '$oldsrc', '$newsrc') WHERE probin LIKE '$oldsrc%';"
+-- ;;
+-- esac
+-- psql -X -d regression -c "$fix_sql;" || psql_fix_sql_status=$?
+--
+-- mv "$temp_root"/dump1.sql "$temp_root"/dump1.sql.orig
+-- sed "s;$oldsrc;$newsrc;g" "$temp_root"/dump1.sql.orig >"$temp_root"/dump1.sql
+-- fi
diff --git a/src/bin/pg_upgrade/test.sh b/src/bin/pg_upgrade/test.sh
index 74c29229ac..a3df427f1d 100644
--- a/src/bin/pg_upgrade/test.sh
+++ b/src/bin/pg_upgrade/test.sh
@@ -178,101 +178,7 @@ if "$MAKE" -C "$oldsrc" installcheck-parallel; then
# before dumping, get rid of objects not feasible in later versions
if [ "$newsrc" != "$oldsrc" ]; then
- fix_sql=""
- case $oldpgversion in
- 804??)
- fix_sql="DROP FUNCTION public.myfunc(integer);"
- ;;
- esac
- fix_sql="$fix_sql
- DROP FUNCTION IF EXISTS
- public.oldstyle_length(integer, text);" # last in 9.6 -- commit 5ded4bd21
- fix_sql="$fix_sql
- DROP FUNCTION IF EXISTS
- public.putenv(text);" # last in v13
- # last in v13 commit 76f412ab3
- # public.!=- This one is only needed for v11+ ??
- # Note, until v10, operators could only be dropped one at a time
- fix_sql="$fix_sql
- DROP OPERATOR IF EXISTS
- public.#@# (pg_catalog.int8, NONE);"
- fix_sql="$fix_sql
- DROP OPERATOR IF EXISTS
- public.#%# (pg_catalog.int8, NONE);"
- fix_sql="$fix_sql
- DROP OPERATOR IF EXISTS
- public.!=- (pg_catalog.int8, NONE);"
- fix_sql="$fix_sql
- DROP OPERATOR IF EXISTS
- public.#@%# (pg_catalog.int8, NONE);"
-
- # commit 068503c76511cdb0080bab689662a20e86b9c845
- case $oldpgversion in
- 10????)
- fix_sql="$fix_sql
- DROP TRANSFORM FOR integer LANGUAGE sql CASCADE;"
- ;;
- esac
-
- # commit db3af9feb19f39827e916145f88fa5eca3130cb2
- case $oldpgversion in
- 10????)
- fix_sql="$fix_sql
- DROP FUNCTION boxarea(box);"
- fix_sql="$fix_sql
- DROP FUNCTION funny_dup17();"
- ;;
- esac
-
- # commit cda6a8d01d391eab45c4b3e0043a1b2b31072f5f
- case $oldpgversion in
- 10????)
- fix_sql="$fix_sql
- DROP TABLE abstime_tbl;"
- fix_sql="$fix_sql
- DROP TABLE reltime_tbl;"
- fix_sql="$fix_sql
- DROP TABLE tinterval_tbl;"
- ;;
- esac
-
- # Various things removed for v14
- case $oldpgversion in
- 906??|10????|11????|12????|13????)
- fix_sql="$fix_sql
- DROP AGGREGATE first_el_agg_any(anyelement);"
- ;;
- esac
- case $oldpgversion in
- 90[56]??|10????|11????|12????|13????)
- # commit 9e38c2bb5 and 97f73a978
- # fix_sql="$fix_sql DROP AGGREGATE array_larger_accum(anyarray);"
- fix_sql="$fix_sql
- DROP AGGREGATE array_cat_accum(anyarray);"
-
- # commit 76f412ab3
- #fix_sql="$fix_sql DROP OPERATOR @#@(bigint,NONE);"
- fix_sql="$fix_sql
- DROP OPERATOR @#@(NONE,bigint);"
- ;;
- esac
-
- # commit 578b22971: OIDS removed in v12
- case $oldpgversion in
- 804??|9????|10????|11????)
- fix_sql="$fix_sql
- ALTER TABLE public.tenk1 SET WITHOUT OIDS;"
- fix_sql="$fix_sql
- ALTER TABLE public.tenk1 SET WITHOUT OIDS;"
- #fix_sql="$fix_sql ALTER TABLE public.stud_emp SET WITHOUT OIDS;" # inherited
- fix_sql="$fix_sql
- ALTER TABLE public.emp SET WITHOUT OIDS;"
- fix_sql="$fix_sql
- ALTER TABLE public.tt7 SET WITHOUT OIDS;"
- ;;
- esac
-
- psql -X -d regression -c "$fix_sql;" || psql_fix_sql_status=$?
+ psql -X -d regression -f "test-upgrade.sql" || psql_fix_sql_status=$?
fi
echo "fix_sql: $oldpgversion: $fix_sql" >&2
--
2.17.0
On Fri, 2021-04-30 at 13:33 -0500, Justin Pryzby wrote:
On Sat, Mar 06, 2021 at 03:01:43PM -0500, Tom Lane wrote:
v4-0001 mostly teaches test.sh about specific changes that have to be
made to historic versions of the regression database to allow them
to be reloaded into current servers. As already discussed, this is
really duplicative of knowledge that's been embedded into the buildfarm
client over time. It'd be better if we could refactor that so that
the buildfarm shares a common database of these actions with test.sh.
And said database ought to be in our git tree, so committers could
fix problems without having to get Andrew involved every time.
I think this could be represented as a psql script, at least in
versions that have psql \if (but that came in in v10, so maybe
we're there already).
I started this. I don't know if it's compatible with the buildfarm client, but
I think any issues maybe can be avoided by using "IF EXISTS".
I'm going to try pulling this into a psql script today and see how far
I get.
But I'm not sure I believe
that query. It's got hard-wired assumptions about which typtype values
need to be covered. Why is it okay to exclude range and multirange?
Are we sure that all composites are okay to exclude? Likewise, the
restriction to pg_catalog and information_schema schemas seems likely to
bite us someday. There are some very random exclusions based on name
patterns, which seem unsafe (let's list the specific type OIDs), and
again the nearby comments don't match the code. But the biggest issue
is that this can only cover core datatypes, not any contrib stuff.
I changed to use regtype/OIDs, included range/multirange and stopped including
only pg_catalog/information_schema. But didn't yet handle composites.
Per cfbot, this test needs to be taught about the new
pg_brin_bloom_summary and pg_brin_minmax_multi_summary types.
--Jacob
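Tom's suggestion above (a psql script gated by \if on the server version) is the shape the v5 scripts take: a SELECT computes boolean flags from server_version_num, and \gset imports them for the \if branches. As a rough illustration of the arithmetic involved, here is the same flag computation redone in plain shell; the function name and the particular flag set are made up for this sketch, and the real script computes them in SQL:

```shell
# Sketch of the version-flag computation from test-upgrade.sql, in shell:
# ver is server_version_num / 100, so 110005 ("11.5") becomes 1100.
ver_flags() {
    ver=$(( $1 / 100 ))
    fromv84=$(( ver <= 804 ))                    # old cluster is 8.4
    fromv13=$(( ver <= 1300 ))                   # old cluster is v13 or older
    fromv95v13=$(( ver >= 905 && ver <= 1300 ))  # 9.5 .. v13 inclusive
    echo "$fromv84 $fromv13 $fromv95v13"
}
```

Upgrading from 11.5 (server_version_num 110005) would set only the two "<= v13" flags here, which is why such a cluster gets the operator and aggregate drops but not the 8.4-only DROP FUNCTION.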
On Fri, 2021-07-16 at 16:21 +0000, Jacob Champion wrote:
On Fri, 2021-04-30 at 13:33 -0500, Justin Pryzby wrote:
On Sat, Mar 06, 2021 at 03:01:43PM -0500, Tom Lane wrote:
v4-0001 mostly teaches test.sh about specific changes that have to be
made to historic versions of the regression database to allow them
to be reloaded into current servers. As already discussed, this is
really duplicative of knowledge that's been embedded into the buildfarm
client over time. It'd be better if we could refactor that so that
the buildfarm shares a common database of these actions with test.sh.
And said database ought to be in our git tree, so committers could
fix problems without having to get Andrew involved every time.
I think this could be represented as a psql script, at least in
versions that have psql \if (but that came in in v10, so maybe
we're there already).
I started this. I don't know if it's compatible with the buildfarm client, but
I think any issues maybe can be avoided by using "IF EXISTS".
I'm going to try pulling this into a psql script today and see how far
I get.
I completely misread this exchange -- you already did this in 0004.
Sorry for the noise.
--Jacob
On Fri, 2021-04-30 at 13:33 -0500, Justin Pryzby wrote:
On Sat, Mar 06, 2021 at 03:01:43PM -0500, Tom Lane wrote:
v4-0001 mostly teaches test.sh about specific changes that have to be
made to historic versions of the regression database to allow them
to be reloaded into current servers. As already discussed, this is
really duplicative of knowledge that's been embedded into the buildfarm
client over time. It'd be better if we could refactor that so that
the buildfarm shares a common database of these actions with test.sh.
And said database ought to be in our git tree, so committers could
fix problems without having to get Andrew involved every time.
I think this could be represented as a psql script, at least in
versions that have psql \if (but that came in in v10, so maybe
we're there already).I started this. I don't know if it's compatible with the buildfarm client, but
I think any issues maybe can be avoided by using "IF EXISTS".
Here are the differences I see on a first pass (without putting too
much thought into how significant the differences are). Buildfarm code
I'm comparing against is at [1].
- Both versions drop @#@ and array_cat_accum, but the buildfarm
additionally replaces them with a new operator and aggregate,
respectively.
- The buildfarm's dropping of table OIDs is probably more resilient,
since it loops over pg_class looking for relhasoids.
- The buildfarm handles (or drops) several contrib databases in
addition to the core regression DB.
- The psql script drops the first_el_agg_any aggregate and a `TRANSFORM
FOR integer`; I don't see any corresponding code in the buildfarm.
- Some version ranges are different between the two. For example,
abstime_/reltime_/tinterval_tbl are dropped by the buildfarm if the old
version is < 9.3, while the psql script drops them for old versions <=
10.
- The buildfarm drops the public.=> operator for much older versions of
Postgres. I assume we don't need that here.
- The buildfarm adjusts pg_proc for the location of regress.so; I see
there's a commented placeholder for this at the end of the psql script
but it's not yet implemented.
As an aside, I think the "fromv10" naming scheme for the "old version
<= 10" condition is unintuitive. If the old version is e.g. 9.6, we're
not upgrading "from 10".
--Jacob
[1]: https://github.com/PGBuildFarm/client-code/blob/main/PGBuild/Modules/TestUpgradeXversion.pm
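The version-range discrepancies listed above are easier to audit once the shell glob patterns in test.sh are read as numeric ranges. A minimal sketch, with the patterns copied from test.sh's first_el_agg_any case and an invented function name:

```shell
# test.sh gates the v14 removals on these glob patterns against
# $oldpgversion (e.g. 90605, 130002); together they mean "9.6 .. v13".
needs_first_el_agg_drop() {
    case $1 in
        906??|10????|11????|12????|13????) return 0 ;;
        *) return 1 ;;
    esac
}
```

The psql rewrite expresses the same range as `ver >= 906 AND ver <= 1300`, and a mismatch like the abstime/reltime one can creep in when a pattern is translated to the wrong numeric bound.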
Jacob Champion <pchampion@vmware.com> writes:
On Fri, 2021-04-30 at 13:33 -0500, Justin Pryzby wrote:
I started this. I don't know if it's compatible with the buildfarm client, but
I think any issues maybe can be avoided by using "IF EXISTS".
Here are the differences I see on a first pass (without putting too
much thought into how significant the differences are). Buildfarm code
I'm comparing against is at [1].
I switched the CF entry for this to "Waiting on Author". It's
been failing in the cfbot for a couple of months, and Jacob's
provided some review-ish comments here, so I think there's
plenty of reason to deem the ball to be in Justin's court.
regards, tom lane
On Fri, Jul 16, 2021 at 06:02:18PM +0000, Jacob Champion wrote:
On Fri, 2021-04-30 at 13:33 -0500, Justin Pryzby wrote:
On Sat, Mar 06, 2021 at 03:01:43PM -0500, Tom Lane wrote:
v4-0001 mostly teaches test.sh about specific changes that have to be
made to historic versions of the regression database to allow them
to be reloaded into current servers. As already discussed, this is
really duplicative of knowledge that's been embedded into the buildfarm
client over time. It'd be better if we could refactor that so that
the buildfarm shares a common database of these actions with test.sh.
And said database ought to be in our git tree, so committers could
fix problems without having to get Andrew involved every time.
I think this could be represented as a psql script, at least in
versions that have psql \if (but that came in in v10, so maybe
we're there already).
I started this. I don't know if it's compatible with the buildfarm client, but
I think any issues maybe can be avoided by using "IF EXISTS".
Here are the differences I see on a first pass (without putting too
much thought into how significant the differences are). Buildfarm code
I'm comparing against is at [1].
- Both versions drop @#@ and array_cat_accum, but the buildfarm
additionally replaces them with a new operator and aggregate,
respectively.
- The buildfarm's dropping of table OIDs is probably more resilient,
since it loops over pg_class looking for relhasoids.
These are all "translated" from test.sh, so they follow its logic.
Maybe it should be improved, but that's separate from this patch - which is
already doing a few unrelated things.
- The buildfarm adjusts pg_proc for the location of regress.so; I see
there's a commented placeholder for this at the end of the psql script
but it's not yet implemented.
I didn't understand why this was done here, but it turns out it has to be done
*after* calling pg_dump. So it has to stay where it is.
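For context, the fixup that has to stay after pg_dump is a plain textual substitution of the old source tree path in the dump. A sketch with invented paths, using the same sed-with-';' idiom as test.sh (the ';' delimiter avoids escaping the '/' characters in source paths):

```shell
# Rewrite references to the old source tree (e.g. absolute regress.so
# paths in pg_proc.probin) so the two dumps compare equal afterwards.
fix_dump_paths() {
    oldsrc=$1; newsrc=$2
    sed "s;$oldsrc;$newsrc;g"
}
```

For example, `echo "LOAD '/old/src/regress.so';" | fix_dump_paths /old/src /new/src` yields `LOAD '/new/src/regress.so';`.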
- Some version ranges are different between the two. For example,
abstime_/reltime_/tinterval_tbl are dropped by the buildfarm if the old
version is < 9.3, while the psql script drops them for old versions <=
10.
This was an error. Thanks.
- The buildfarm drops the public.=> operator for much older versions of
Postgres. I assume we don't need that here.
As an aside, I think the "fromv10" naming scheme for the "old version
<= 10" condition is unintuitive. If the old version is e.g. 9.6, we're
not upgrading "from 10".
I renamed the version vars - feel free to suggest something better.
I'll solicit suggestions on what else to do to progress these.
@Andrew: did you have any comment on this part?
|Subject: buildfarm xversion diff
|Forking /messages/by-id/20210328231433.GI15100@telsasoft.com
|
|I gave suggestion how to reduce the "lines of diff" metric almost to nothing,
|allowing a very small "fudge factor", and which I think makes this a pretty
|good metric rather than a passable one.
--
Justin
Attachments:
Attachment: v5-0001-WIP-pg_upgrade-test.sh-changes-needed-to-allow-te.patch (text/x-diff; charset=us-ascii)
From 01debb48fe424e94cd2533d062c828ee6308d3c9 Mon Sep 17 00:00:00 2001
From: Justin Pryzby <pryzbyj@telsasoft.com>
Date: Sat, 5 Dec 2020 22:31:19 -0600
Subject: [PATCH v5 1/4] WIP: pg_upgrade/test.sh: changes needed to allow
testing upgrade to v14dev from v9.5-v13
test like:
time make -C src/bin/pg_upgrade check oldsrc=`pwd`/11 oldbindir=`pwd`/11/tmp_install/usr/local/pgsql/bin
---
src/bin/pg_upgrade/test.sh | 93 +++++++++++++++++++++++++++++++++++---
1 file changed, 86 insertions(+), 7 deletions(-)
diff --git a/src/bin/pg_upgrade/test.sh b/src/bin/pg_upgrade/test.sh
index 1ba326decd..9288cfdda8 100644
--- a/src/bin/pg_upgrade/test.sh
+++ b/src/bin/pg_upgrade/test.sh
@@ -176,18 +176,97 @@ if "$MAKE" -C "$oldsrc" installcheck-parallel; then
esac
fix_sql="$fix_sql
DROP FUNCTION IF EXISTS
- public.oldstyle_length(integer, text); -- last in 9.6
+ public.oldstyle_length(integer, text);" # last in 9.6 -- commit 5ded4bd21
+ fix_sql="$fix_sql
DROP FUNCTION IF EXISTS
- public.putenv(text); -- last in v13
- DROP OPERATOR IF EXISTS -- last in v13
- public.#@# (pg_catalog.int8, NONE),
- public.#%# (pg_catalog.int8, NONE),
- public.!=- (pg_catalog.int8, NONE),
+ public.putenv(text);" # last in v13
+ # last in v13 commit 76f412ab3
+ # public.!=- This one is only needed for v11+ ??
+ # Note, until v10, operators could only be dropped one at a time
+ fix_sql="$fix_sql
+ DROP OPERATOR IF EXISTS
+ public.#@# (pg_catalog.int8, NONE);"
+ fix_sql="$fix_sql
+ DROP OPERATOR IF EXISTS
+ public.#%# (pg_catalog.int8, NONE);"
+ fix_sql="$fix_sql
+ DROP OPERATOR IF EXISTS
+ public.!=- (pg_catalog.int8, NONE);"
+ fix_sql="$fix_sql
+ DROP OPERATOR IF EXISTS
public.#@%# (pg_catalog.int8, NONE);"
+
+ # commit 068503c76511cdb0080bab689662a20e86b9c845
+ case $oldpgversion in
+ 10????)
+ fix_sql="$fix_sql
+ DROP TRANSFORM FOR integer LANGUAGE sql CASCADE;"
+ ;;
+ esac
+
+ # commit db3af9feb19f39827e916145f88fa5eca3130cb2
+ case $oldpgversion in
+ 10????)
+ fix_sql="$fix_sql
+ DROP FUNCTION boxarea(box);"
+ fix_sql="$fix_sql
+ DROP FUNCTION funny_dup17();"
+ ;;
+ esac
+
+ # commit cda6a8d01d391eab45c4b3e0043a1b2b31072f5f
+ case $oldpgversion in
+ 10????)
+ fix_sql="$fix_sql
+ DROP TABLE abstime_tbl;"
+ fix_sql="$fix_sql
+ DROP TABLE reltime_tbl;"
+ fix_sql="$fix_sql
+ DROP TABLE tinterval_tbl;"
+ ;;
+ esac
+
+ # Various things removed for v14
+ case $oldpgversion in
+ 906??|10????|11????|12????|13????)
+ fix_sql="$fix_sql
+ DROP AGGREGATE first_el_agg_any(anyelement);"
+ ;;
+ esac
+ case $oldpgversion in
+ 90[56]??|10????|11????|12????|13????)
+ # commit 9e38c2bb5 and 97f73a978
+ # fix_sql="$fix_sql DROP AGGREGATE array_larger_accum(anyarray);"
+ fix_sql="$fix_sql
+ DROP AGGREGATE array_cat_accum(anyarray);"
+
+ # commit 76f412ab3
+ #fix_sql="$fix_sql DROP OPERATOR @#@(bigint,NONE);"
+ fix_sql="$fix_sql
+ DROP OPERATOR @#@(NONE,bigint);"
+ ;;
+ esac
+
+ # commit 578b22971: OIDS removed in v12
+ case $oldpgversion in
+ 804??|9????|10????|11????)
+ fix_sql="$fix_sql
+ ALTER TABLE public.tenk1 SET WITHOUT OIDS;"
+ fix_sql="$fix_sql
+ ALTER TABLE public.tenk1 SET WITHOUT OIDS;"
+ #fix_sql="$fix_sql ALTER TABLE public.stud_emp SET WITHOUT OIDS;" # inherited
+ fix_sql="$fix_sql
+ ALTER TABLE public.emp SET WITHOUT OIDS;"
+ fix_sql="$fix_sql
+ ALTER TABLE public.tt7 SET WITHOUT OIDS;"
+ ;;
+ esac
+
psql -X -d regression -c "$fix_sql;" || psql_fix_sql_status=$?
fi
- pg_dumpall --no-sync -f "$temp_root"/dump1.sql || pg_dumpall1_status=$?
+ echo "fix_sql: $oldpgversion: $fix_sql" >&2
+ pg_dumpall --extra-float-digits=0 --no-sync -f "$temp_root"/dump1.sql || pg_dumpall1_status=$?
if [ "$newsrc" != "$oldsrc" ]; then
# update references to old source tree's regress.so etc
--
2.17.0
Attachment: v5-0002-More-changes-needed-to-allow-upgrade-testing.patch (text/x-diff; charset=us-ascii)
From 8f39f5b66bae4994e022008a90356b3a36466833 Mon Sep 17 00:00:00 2001
From: Justin Pryzby <pryzbyj@telsasoft.com>
Date: Mon, 11 Jan 2021 21:41:16 -0600
Subject: [PATCH v5 2/4] More changes needed to allow upgrade testing:
These all seem to complicate use of pg_upgrade/test.sh:
e78900afd217fa3eaa77c51e23a94c1466af421c Create by default sql/ and expected/ for output directory in pg_regress
40b132c1afbb4b1494aa8e48cc35ec98d2b90777 In the pg_upgrade test suite, don't write to src/test/regress.
fc49e24fa69a15efacd5b8958115ed9c43c48f9a Make WAL segment size configurable at initdb time.
c37b3d08ca6873f9d4eaf24c72a90a550970cbb8 Allow group access on PGDATA
da9b580d89903fee871cf54845ffa2b26bda2e11 Refactor dir/file permissions
---
src/bin/pg_upgrade/test.sh | 27 +++++++++++++++++++++------
1 file changed, 21 insertions(+), 6 deletions(-)
diff --git a/src/bin/pg_upgrade/test.sh b/src/bin/pg_upgrade/test.sh
index 9288cfdda8..2bdd8c19de 100644
--- a/src/bin/pg_upgrade/test.sh
+++ b/src/bin/pg_upgrade/test.sh
@@ -23,7 +23,13 @@ standard_initdb() {
# To increase coverage of non-standard segment size and group access
# without increasing test runtime, run these tests with a custom setting.
# Also, specify "-A trust" explicitly to suppress initdb's warning.
- "$1" -N --wal-segsize 1 -g -A trust
+ if [ -z "$oldsrc" ]
+ then
+ "$1" -N -A trust --wal-segsize 1 -g
+ else
+ "$1" -N -A trust
+ fi
+
if [ -n "$TEMP_CONFIG" -a -r "$TEMP_CONFIG" ]
then
cat "$TEMP_CONFIG" >> "$PGDATA/postgresql.conf"
@@ -106,6 +112,9 @@ outputdir="$temp_root/regress"
EXTRA_REGRESS_OPTS="$EXTRA_REGRESS_OPTS --outputdir=$outputdir"
export EXTRA_REGRESS_OPTS
mkdir "$outputdir"
+mkdir "$outputdir"/testtablespace
+mkdir "$outputdir"/sql
+mkdir "$outputdir"/expected
logdir=`pwd`/log
rm -rf "$logdir"
@@ -312,23 +321,29 @@ pg_upgrade $PG_UPGRADE_OPTS -d "${PGDATA}.old" -D "$PGDATA" -b "$oldbindir" -p "
# Windows hosts don't support Unix-y permissions.
case $testhost in
MINGW*|CYGWIN*) ;;
- *) if [ `find "$PGDATA" -type f ! -perm 640 | wc -l` -ne 0 ]; then
- echo "files in PGDATA with permission != 640";
+ *)
+ x=`find "$PGDATA" -type f ! -perm 600 ! -perm 640 -ls`
+ if [ -n "$x" ]; then
+ echo "files in PGDATA with permission NOT IN (600, 640)";
+ echo "$x" |head
exit 1;
fi ;;
esac
case $testhost in
MINGW*|CYGWIN*) ;;
- *) if [ `find "$PGDATA" -type d ! -perm 750 | wc -l` -ne 0 ]; then
- echo "directories in PGDATA with permission != 750";
+ *)
+ x=`find "$PGDATA" -type d ! -perm 700 ! -perm 750 -ls`
+ if [ "$x" ]; then
+ echo "directories in PGDATA with permission NOT IN (700, 750)";
+ echo "$x" |head
exit 1;
fi ;;
esac
pg_ctl start -l "$logdir/postmaster2.log" -o "$POSTMASTER_OPTS" -w
-pg_dumpall --no-sync -f "$temp_root"/dump2.sql || pg_dumpall2_status=$?
+pg_dumpall --extra-float-digits=0 --no-sync -f "$temp_root"/dump2.sql || pg_dumpall2_status=$?
pg_ctl -m fast stop
if [ -n "$pg_dumpall2_status" ]; then
--
2.17.0
Attachment: v5-0004-Move-pg_upgrade-kludges-to-sql-script.patch (text/x-diff; charset=us-ascii)
From 9e710615e610f055de9ae4675704fd7429dd8155 Mon Sep 17 00:00:00 2001
From: Justin Pryzby <pryzbyj@telsasoft.com>
Date: Sat, 6 Mar 2021 18:35:26 -0600
Subject: [PATCH v5 4/4] Move pg_upgrade kludges to sql script
---
src/bin/pg_upgrade/test-upgrade.sql | 70 +++++++++++++++++++++
src/bin/pg_upgrade/test.sh | 97 +----------------------------
2 files changed, 71 insertions(+), 96 deletions(-)
create mode 100644 src/bin/pg_upgrade/test-upgrade.sql
diff --git a/src/bin/pg_upgrade/test-upgrade.sql b/src/bin/pg_upgrade/test-upgrade.sql
new file mode 100644
index 0000000000..8c7cceb211
--- /dev/null
+++ b/src/bin/pg_upgrade/test-upgrade.sql
@@ -0,0 +1,70 @@
+-- This file has a bunch of kludges needed for testing upgrades across major versions
+
+SELECT
+ ver >= 804 AND ver <= 1100 AS oldpgversion_84_11,
+ ver >= 905 AND ver <= 1300 AS oldpgversion_95_13,
+ ver >= 906 AND ver <= 1300 AS oldpgversion_96_13,
+ ver >= 906 AND ver <= 1000 AS oldpgversion_96_10,
+ ver >= 1000 AS oldpgversion_ge10,
+ ver <= 804 AS oldpgversion_le84,
+ ver <= 1300 AS oldpgversion_le13
+ FROM (SELECT current_setting('server_version_num')::int/100 AS ver) AS v;
+\gset
+
+\if :oldpgversion_le84
+DROP FUNCTION public.myfunc(integer);
+\endif
+
+-- last in 9.6 -- commit 5ded4bd21
+DROP FUNCTION IF EXISTS public.oldstyle_length(integer, text);
+DROP FUNCTION IF EXISTS public.putenv(text);
+
+\if :oldpgversion_le13
+-- last in v13 commit 76f412ab3
+-- public.!=- This one is only needed for v11+ ??
+-- Note, until v10, operators could only be dropped one at a time
+DROP OPERATOR IF EXISTS public.#@# (pg_catalog.int8, NONE);
+DROP OPERATOR IF EXISTS public.#%# (pg_catalog.int8, NONE);
+DROP OPERATOR IF EXISTS public.!=- (pg_catalog.int8, NONE);
+DROP OPERATOR IF EXISTS public.#@%# (pg_catalog.int8, NONE);
+\endif
+
+\if :oldpgversion_ge10
+-- commit 068503c76511cdb0080bab689662a20e86b9c845
+DROP TRANSFORM FOR integer LANGUAGE sql CASCADE;
+\endif
+
+\if :oldpgversion_96_10
+-- commit db3af9feb19f39827e916145f88fa5eca3130cb2
+DROP FUNCTION boxarea(box);
+DROP FUNCTION funny_dup17();
+
+-- commit cda6a8d01d391eab45c4b3e0043a1b2b31072f5f
+DROP TABLE abstime_tbl;
+DROP TABLE reltime_tbl;
+DROP TABLE tinterval_tbl;
+\endif
+
+\if :oldpgversion_96_13
+-- Various things removed for v14
+DROP AGGREGATE first_el_agg_any(anyelement);
+\endif
+
+\if :oldpgversion_95_13
+-- commit 9e38c2bb5 and 97f73a978
+-- DROP AGGREGATE array_larger_accum(anyarray);
+DROP AGGREGATE array_cat_accum(anyarray);
+
+-- commit 76f412ab3
+-- DROP OPERATOR @#@(bigint,NONE);
+DROP OPERATOR @#@(NONE,bigint);
+\endif
+
+\if :oldpgversion_84_11
+-- commit 578b22971: OIDS removed in v12
+ALTER TABLE public.tenk1 SET WITHOUT OIDS;
+ALTER TABLE public.tenk1 SET WITHOUT OIDS;
+-- fix_sql="$fix_sql ALTER TABLE public.stud_emp SET WITHOUT OIDS;" # inherited
+ALTER TABLE public.emp SET WITHOUT OIDS;
+ALTER TABLE public.tt7 SET WITHOUT OIDS;
+\endif
diff --git a/src/bin/pg_upgrade/test.sh b/src/bin/pg_upgrade/test.sh
index 2bdd8c19de..61bcca3673 100644
--- a/src/bin/pg_upgrade/test.sh
+++ b/src/bin/pg_upgrade/test.sh
@@ -177,104 +177,9 @@ if "$MAKE" -C "$oldsrc" installcheck-parallel; then
# before dumping, get rid of objects not feasible in later versions
if [ "$newsrc" != "$oldsrc" ]; then
- fix_sql=""
- case $oldpgversion in
- 804??)
- fix_sql="DROP FUNCTION public.myfunc(integer);"
- ;;
- esac
- fix_sql="$fix_sql
- DROP FUNCTION IF EXISTS
- public.oldstyle_length(integer, text);" # last in 9.6 -- commit 5ded4bd21
- fix_sql="$fix_sql
- DROP FUNCTION IF EXISTS
- public.putenv(text);" # last in v13
- # last in v13 commit 76f412ab3
- # public.!=- This one is only needed for v11+ ??
- # Note, until v10, operators could only be dropped one at a time
- fix_sql="$fix_sql
- DROP OPERATOR IF EXISTS
- public.#@# (pg_catalog.int8, NONE);"
- fix_sql="$fix_sql
- DROP OPERATOR IF EXISTS
- public.#%# (pg_catalog.int8, NONE);"
- fix_sql="$fix_sql
- DROP OPERATOR IF EXISTS
- public.!=- (pg_catalog.int8, NONE);"
- fix_sql="$fix_sql
- DROP OPERATOR IF EXISTS
- public.#@%# (pg_catalog.int8, NONE);"
-
- # commit 068503c76511cdb0080bab689662a20e86b9c845
- case $oldpgversion in
- 10????)
- fix_sql="$fix_sql
- DROP TRANSFORM FOR integer LANGUAGE sql CASCADE;"
- ;;
- esac
-
- # commit db3af9feb19f39827e916145f88fa5eca3130cb2
- case $oldpgversion in
- 10????)
- fix_sql="$fix_sql
- DROP FUNCTION boxarea(box);"
- fix_sql="$fix_sql
- DROP FUNCTION funny_dup17();"
- ;;
- esac
-
- # commit cda6a8d01d391eab45c4b3e0043a1b2b31072f5f
- case $oldpgversion in
- 10????)
- fix_sql="$fix_sql
- DROP TABLE abstime_tbl;"
- fix_sql="$fix_sql
- DROP TABLE reltime_tbl;"
- fix_sql="$fix_sql
- DROP TABLE tinterval_tbl;"
- ;;
- esac
-
- # Various things removed for v14
- case $oldpgversion in
- 906??|10????|11????|12????|13????)
- fix_sql="$fix_sql
- DROP AGGREGATE first_el_agg_any(anyelement);"
- ;;
- esac
- case $oldpgversion in
- 90[56]??|10????|11????|12????|13????)
- # commit 9e38c2bb5 and 97f73a978
- # fix_sql="$fix_sql DROP AGGREGATE array_larger_accum(anyarray);"
- fix_sql="$fix_sql
- DROP AGGREGATE array_cat_accum(anyarray);"
-
- # commit 76f412ab3
- #fix_sql="$fix_sql DROP OPERATOR @#@(bigint,NONE);"
- fix_sql="$fix_sql
- DROP OPERATOR @#@(NONE,bigint);"
- ;;
- esac
-
- # commit 578b22971: OIDS removed in v12
- case $oldpgversion in
- 804??|9????|10????|11????)
- fix_sql="$fix_sql
- ALTER TABLE public.tenk1 SET WITHOUT OIDS;"
- fix_sql="$fix_sql
- ALTER TABLE public.tenk1 SET WITHOUT OIDS;"
- #fix_sql="$fix_sql ALTER TABLE public.stud_emp SET WITHOUT OIDS;" # inherited
- fix_sql="$fix_sql
- ALTER TABLE public.emp SET WITHOUT OIDS;"
- fix_sql="$fix_sql
- ALTER TABLE public.tt7 SET WITHOUT OIDS;"
- ;;
- esac
-
- psql -X -d regression -c "$fix_sql;" || psql_fix_sql_status=$?
+ psql -X -d regression -f "test-upgrade.sql" || psql_fix_sql_status=$?
fi
- echo "fix_sql: $oldpgversion: $fix_sql" >&2
pg_dumpall --extra-float-digits=0 --no-sync -f "$temp_root"/dump1.sql || pg_dumpall1_status=$?
if [ "$newsrc" != "$oldsrc" ]; then
--
2.17.0
Attachment: v5-0003-pg_upgrade-test-to-exercise-binary-compatibility.patch (text/x-diff; charset=us-ascii)
From 8a7a2ac9aa73f06a2b3d413fc3712677012ae88a Mon Sep 17 00:00:00 2001
From: Justin Pryzby <pryzbyj@telsasoft.com>
Date: Sat, 5 Dec 2020 17:20:09 -0600
Subject: [PATCH v5 3/4] pg_upgrade: test to exercise binary compatibility
Creating a table with columns of many different datatypes to notice if the
binary format is accidentally changed again, as happened at:
7c15cef86 Base information_schema.sql_identifier domain on name, not varchar.
I checked that if I cherry-pick to v11, and comment out
old_11_check_for_sql_identifier_data_type_usage(), then pg_upgrade/test.sh
detects the original problem:
pg_dump: error: Error message from server: ERROR: invalid memory alloc request size 18446744073709551613
I understand the buildfarm has its own cross-version-upgrade test, which I
think would catch this on its own.
---
src/test/regress/expected/sanity_check.out | 1 +
src/test/regress/expected/type_sanity.out | 55 ++++++++++++++++++++++
src/test/regress/sql/type_sanity.sql | 54 +++++++++++++++++++++
3 files changed, 110 insertions(+)
diff --git a/src/test/regress/expected/sanity_check.out b/src/test/regress/expected/sanity_check.out
index 982b6aff53..551f35d59f 100644
--- a/src/test/regress/expected/sanity_check.out
+++ b/src/test/regress/expected/sanity_check.out
@@ -69,6 +69,7 @@ line_tbl|f
log_table|f
lseg_tbl|f
main_table|f
+manytypes|f
mlparted|f
mlparted1|f
mlparted11|f
diff --git a/src/test/regress/expected/type_sanity.out b/src/test/regress/expected/type_sanity.out
index f567fd378e..58013a8df3 100644
--- a/src/test/regress/expected/type_sanity.out
+++ b/src/test/regress/expected/type_sanity.out
@@ -674,3 +674,58 @@ WHERE p1.rngmultitypid IS NULL OR p1.rngmultitypid = 0;
----------+------------+---------------
(0 rows)
+-- Create a table with different data types, to exercise binary compatibility
+-- during pg_upgrade test
+CREATE TABLE manytypes AS SELECT
+'(11,12)'::point, '(1,1),(2,2)'::line,
+'((11,11),(12,12))'::lseg, '((11,11),(13,13))'::box,
+'((11,12),(13,13),(14,14))'::path AS openedpath, '[(11,12),(13,13),(14,14)]'::path AS closedpath,
+'((11,12),(13,13),(14,14))'::polygon, '1,1,1'::circle,
+'today'::date, 'now'::time, 'now'::timestamp, 'now'::timetz, 'now'::timestamptz, '12 seconds'::interval,
+'{"reason":"because"}'::json, '{"when":"now"}'::jsonb, '$.a[*] ? (@ > 2)'::jsonpath,
+'127.0.0.1'::inet, '127.0.0.0/8'::cidr, '00:01:03:86:1c:ba'::macaddr8, '00:01:03:86:1c:ba'::macaddr,
+2::int2, 4::int4, 8::int8, 4::float4, '8'::float8, pi()::numeric,
+'foo'::"char", 'c'::bpchar, 'abc'::varchar, 'name'::name, 'txt'::text, true::bool,
+E'\\xDEADBEEF'::bytea, B'10001'::bit, B'10001'::varbit AS varbit, '12.34'::money,
+'abc'::refcursor,
+'1 2'::int2vector, '1 2'::oidvector, format('%s=UC/%s', USER, USER)::aclitem,
+'a fat cat sat on a mat and ate a fat rat'::tsvector, 'fat & rat'::tsquery,
+'a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a11'::uuid, '11'::xid8,
+'pg_class'::regclass, 'regtype'::regtype type, 'pg_monitor'::regrole,
+'pg_class'::regclass::oid, '(1,1)'::tid, '2'::xid, '3'::cid,
+'10:20:10,14,15'::txid_snapshot, '10:20:10,14,15'::pg_snapshot, '16/B374D848'::pg_lsn,
+1::information_schema.cardinal_number,
+'l'::information_schema.character_data,
+'n'::information_schema.sql_identifier,
+'now'::information_schema.time_stamp,
+'YES'::information_schema.yes_or_no,
+'venus'::planets, 'i16'::insenum,
+'(1,2)'::int4range, '{(1,2)}'::int4multirange,
+'(3,4)'::int8range, '{(3,4)}'::int8multirange,
+'(1,2)'::float8range, '{(1,2)}'::float8multirange,
+'(3,4)'::numrange, '{(3,4)}'::nummultirange,
+'(a,b)'::textrange, '{(a,b)}'::textmultirange,
+'(12.34, 56.78)'::cashrange, '{(12.34, 56.78)}'::cashmultirange,
+'(2020-01-02, 2021-02-03)'::daterange,
+'{(2020-01-02, 2021-02-03)}'::datemultirange,
+'(2020-01-02 03:04:05, 2021-02-03 06:07:08)'::tsrange,
+'{(2020-01-02 03:04:05, 2021-02-03 06:07:08)}'::tsmultirange,
+'(2020-01-02 03:04:05, 2021-02-03 06:07:08)'::tstzrange,
+'{(2020-01-02 03:04:05, 2021-02-03 06:07:08)}'::tstzmultirange,
+arrayrange(ARRAY[1,2], ARRAY[2,1]),
+arraymultirange(arrayrange(ARRAY[1,2], ARRAY[2,1]));
+-- And now a test on the previous test, checking that all core types are
+-- included in this table
+-- XXX or some other non-catalog table processed by pg_upgrade
+SELECT oid, typname, typtype, typelem, typarray, typarray FROM pg_type t
+WHERE typtype NOT IN ('p', 'c')
+-- reg* which cannot be pg_upgraded
+AND oid != ALL(ARRAY['regproc', 'regprocedure', 'regoper', 'regoperator', 'regconfig', 'regdictionary', 'regnamespace', 'regcollation']::regtype[])
+-- XML might be disabled at compile-time
+AND oid != ALL(ARRAY['xml', 'gtsvector', 'pg_node_tree', 'pg_ndistinct', 'pg_dependencies', 'pg_mcv_list', 'pg_brin_bloom_summary', 'pg_brin_minmax_multi_summary']::regtype[])
+AND NOT EXISTS (SELECT 1 FROM pg_type u WHERE u.typarray=t.oid) -- exclude arrays
+AND NOT EXISTS (SELECT 1 FROM pg_attribute a WHERE a.atttypid=t.oid AND a.attnum>0 AND a.attrelid='manytypes'::regclass);
+ oid | typname | typtype | typelem | typarray | typarray
+-----+---------+---------+---------+----------+----------
+(0 rows)
+
diff --git a/src/test/regress/sql/type_sanity.sql b/src/test/regress/sql/type_sanity.sql
index 404c3a2043..e98191f01f 100644
--- a/src/test/regress/sql/type_sanity.sql
+++ b/src/test/regress/sql/type_sanity.sql
@@ -495,3 +495,57 @@ WHERE pronargs != 2
SELECT p1.rngtypid, p1.rngsubtype, p1.rngmultitypid
FROM pg_range p1
WHERE p1.rngmultitypid IS NULL OR p1.rngmultitypid = 0;
+
+-- Create a table with different data types, to exercise binary compatibility
+-- during pg_upgrade test
+
+CREATE TABLE manytypes AS SELECT
+'(11,12)'::point, '(1,1),(2,2)'::line,
+'((11,11),(12,12))'::lseg, '((11,11),(13,13))'::box,
+'((11,12),(13,13),(14,14))'::path AS openedpath, '[(11,12),(13,13),(14,14)]'::path AS closedpath,
+'((11,12),(13,13),(14,14))'::polygon, '1,1,1'::circle,
+'today'::date, 'now'::time, 'now'::timestamp, 'now'::timetz, 'now'::timestamptz, '12 seconds'::interval,
+'{"reason":"because"}'::json, '{"when":"now"}'::jsonb, '$.a[*] ? (@ > 2)'::jsonpath,
+'127.0.0.1'::inet, '127.0.0.0/8'::cidr, '00:01:03:86:1c:ba'::macaddr8, '00:01:03:86:1c:ba'::macaddr,
+2::int2, 4::int4, 8::int8, 4::float4, '8'::float8, pi()::numeric,
+'foo'::"char", 'c'::bpchar, 'abc'::varchar, 'name'::name, 'txt'::text, true::bool,
+E'\\xDEADBEEF'::bytea, B'10001'::bit, B'10001'::varbit AS varbit, '12.34'::money,
+'abc'::refcursor,
+'1 2'::int2vector, '1 2'::oidvector, format('%s=UC/%s', USER, USER)::aclitem,
+'a fat cat sat on a mat and ate a fat rat'::tsvector, 'fat & rat'::tsquery,
+'a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a11'::uuid, '11'::xid8,
+'pg_class'::regclass, 'regtype'::regtype type, 'pg_monitor'::regrole,
+'pg_class'::regclass::oid, '(1,1)'::tid, '2'::xid, '3'::cid,
+'10:20:10,14,15'::txid_snapshot, '10:20:10,14,15'::pg_snapshot, '16/B374D848'::pg_lsn,
+1::information_schema.cardinal_number,
+'l'::information_schema.character_data,
+'n'::information_schema.sql_identifier,
+'now'::information_schema.time_stamp,
+'YES'::information_schema.yes_or_no,
+'venus'::planets, 'i16'::insenum,
+'(1,2)'::int4range, '{(1,2)}'::int4multirange,
+'(3,4)'::int8range, '{(3,4)}'::int8multirange,
+'(1,2)'::float8range, '{(1,2)}'::float8multirange,
+'(3,4)'::numrange, '{(3,4)}'::nummultirange,
+'(a,b)'::textrange, '{(a,b)}'::textmultirange,
+'(12.34, 56.78)'::cashrange, '{(12.34, 56.78)}'::cashmultirange,
+'(2020-01-02, 2021-02-03)'::daterange,
+'{(2020-01-02, 2021-02-03)}'::datemultirange,
+'(2020-01-02 03:04:05, 2021-02-03 06:07:08)'::tsrange,
+'{(2020-01-02 03:04:05, 2021-02-03 06:07:08)}'::tsmultirange,
+'(2020-01-02 03:04:05, 2021-02-03 06:07:08)'::tstzrange,
+'{(2020-01-02 03:04:05, 2021-02-03 06:07:08)}'::tstzmultirange,
+arrayrange(ARRAY[1,2], ARRAY[2,1]),
+arraymultirange(arrayrange(ARRAY[1,2], ARRAY[2,1]));
+
+-- And now a test on the previous test, checking that all core types are
+-- included in this table
+-- XXX or some other non-catalog table processed by pg_upgrade
+SELECT oid, typname, typtype, typelem, typarray, typarray FROM pg_type t
+WHERE typtype NOT IN ('p', 'c')
+-- reg* which cannot be pg_upgraded
+AND oid != ALL(ARRAY['regproc', 'regprocedure', 'regoper', 'regoperator', 'regconfig', 'regdictionary', 'regnamespace', 'regcollation']::regtype[])
+-- XML might be disabled at compile-time
+AND oid != ALL(ARRAY['xml', 'gtsvector', 'pg_node_tree', 'pg_ndistinct', 'pg_dependencies', 'pg_mcv_list', 'pg_brin_bloom_summary', 'pg_brin_minmax_multi_summary']::regtype[])
+AND NOT EXISTS (SELECT 1 FROM pg_type u WHERE u.typarray=t.oid) -- exclude arrays
+AND NOT EXISTS (SELECT 1 FROM pg_attribute a WHERE a.atttypid=t.oid AND a.attnum>0 AND a.attrelid='manytypes'::regclass);
--
2.17.0
On 9/11/21 8:51 PM, Justin Pryzby wrote:
@Andrew: did you have any comment on this part ?
|Subject: buildfarm xversion diff
|Forking /messages/by-id/20210328231433.GI15100@telsasoft.com
|
|I gave suggestion how to reduce the "lines of diff" metric almost to nothing,
|allowing a very small "fudge factor", and which I think makes this a pretty
|good metric rather than a passable one.
Somehow I missed that. Looks like some good suggestions. I'll
experiment. (Note: we can't assume the presence of sed, especially on
Windows).
cheers
andrew
--
Andrew Dunstan
EDB: https://www.enterprisedb.com
On 9/12/21 2:41 PM, Andrew Dunstan wrote:
On 9/11/21 8:51 PM, Justin Pryzby wrote:
@Andrew: did you have any comment on this part ?
|Subject: buildfarm xversion diff
|Forking /messages/by-id/20210328231433.GI15100@telsasoft.com
|
|I gave suggestion how to reduce the "lines of diff" metric almost to nothing,
|allowing a very small "fudge factor", and which I think makes this a pretty
|good metric rather than a passable one.
Somehow I missed that. Looks like some good suggestions. I'll
experiment. (Note: we can't assume the presence of sed, especially on
Windows).
I tried with the attached patch on crake, which tests back as far as
9.2. Here are the diff counts from HEAD:
andrew@emma:HEAD $ grep -c '^[+-]' dumpdiff-REL9_* dumpdiff-REL_1* dumpdiff-HEAD
dumpdiff-REL9_2_STABLE:514
dumpdiff-REL9_3_STABLE:169
dumpdiff-REL9_4_STABLE:185
dumpdiff-REL9_5_STABLE:221
dumpdiff-REL9_6_STABLE:11
dumpdiff-REL_10_STABLE:11
dumpdiff-REL_11_STABLE:73
dumpdiff-REL_12_STABLE:73
dumpdiff-REL_13_STABLE:73
dumpdiff-REL_14_STABLE:0
dumpdiff-HEAD:0
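For illustration, the counting step above can be sketched with plain coreutils. This is a hypothetical, simplified stand-in for the buildfarm workflow (assuming GNU diff and grep); old_dump.sql and new_dump.sql are placeholders for the real origin/converted .sql.fixed files:

```shell
# Hypothetical sketch of the dump-diff metric: diff two (already
# normalized) dumps and count added/removed lines, as in the
# grep -c '^[+-]' command above.  The file contents are made up to
# mimic the public-schema ACL change seen in the real dumpdiffs.
printf 'CREATE TABLE t (a integer);\nGRANT ALL ON SCHEMA public TO PUBLIC;\n' > old_dump.sql
printf 'CREATE TABLE t (a integer);\nREVOKE USAGE ON SCHEMA public FROM PUBLIC;\nGRANT ALL ON SCHEMA public TO PUBLIC;\n' > new_dump.sql

# Unified diff; drop the +++/--- file headers so only real changed
# lines are counted.  Prints 1 here (the added REVOKE line).
diff -u old_dump.sql new_dump.sql \
    | grep -v -e '^+++' -e '^---' \
    | grep -c '^[+-]'
```

A count near zero (after allowing a small fudge factor for known cosmetic differences such as the ACL rewrites above) is what makes this a usable cross-version-upgrade metric.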
I've also attached those non-empty dumpdiff files for information, since
they are quite small.
There is still work to do, but this is promising. Next step: try it on
Windows.
cheers
andrew
--
Andrew Dunstan
EDB: https://www.enterprisedb.com
Attachments:
dumpdiff-REL9_4_STABLE (text/plain; charset=UTF-8)
--- /home/andrew/bf/root/upgrade.crake/HEAD/origin-REL9_4_STABLE.sql.fixed 2021-09-12 16:03:15.842096983 -0400
+++ /home/andrew/bf/root/upgrade.crake/HEAD/converted-REL9_4_STABLE-to-HEAD.sql.fixed 2021-09-12 16:03:15.861096984 -0400
@@ -62,9 +62,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -1616,9 +1616,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -8749,9 +8749,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -9592,9 +9592,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -28059,9 +28059,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -28401,9 +28401,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -31641,9 +31641,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -31746,9 +31744,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -31851,9 +31847,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -31963,9 +31959,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -32054,9 +32048,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -32145,9 +32137,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -33357,9 +33349,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -33448,9 +33438,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -35613,9 +35603,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -36776,9 +36766,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -36937,9 +36927,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -37079,9 +37069,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -37257,9 +37247,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -38171,9 +38159,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -40882,9 +40870,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -41202,9 +41190,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -41320,9 +41306,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -41425,9 +41409,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -41530,9 +41512,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -41703,9 +41683,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -41794,9 +41772,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -43238,9 +43216,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -44121,9 +44099,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -45078,9 +45056,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -45125,9 +45101,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -45248,7 +45222,8 @@
--
CREATE TYPE public.arrayrange AS RANGE (
- subtype = integer[]
+ subtype = integer[],
+ multirange_type_name = public.arraymultirange
);
@@ -45271,7 +45246,8 @@
--
CREATE TYPE public.cashrange AS RANGE (
- subtype = money
+ subtype = money,
+ multirange_type_name = public.cashmultirange
);
@@ -45358,6 +45334,7 @@
INTERNALLENGTH = 16,
INPUT = public.int44in,
OUTPUT = public.int44out,
+ SUBSCRIPT = raw_array_subscript_handler,
ELEMENT = integer,
CATEGORY = 'x',
PREFERRED = true,
@@ -45405,6 +45382,7 @@
CREATE TYPE public.float8range AS RANGE (
subtype = double precision,
+ multirange_type_name = public.float8multirange,
subtype_diff = float8mi
);
@@ -45874,6 +45852,7 @@
CREATE TYPE public.textrange AS RANGE (
subtype = text,
+ multirange_type_name = public.textmultirange,
collation = pg_catalog."C"
);
@@ -50638,7 +50619,7 @@
x integer,
y text,
z integer,
- CONSTRAINT sequence_con CHECK ((((x > 3) AND (y <> 'check failed'::text)) AND (z < 8)))
+ CONSTRAINT sequence_con CHECK (((x > 3) AND (y <> 'check failed'::text) AND (z < 8)))
);
@@ -50781,7 +50762,7 @@
x integer,
y text,
z integer,
- CONSTRAINT copy_con CHECK ((((x > 3) AND (y <> 'check failed'::text)) AND (x < 7)))
+ CONSTRAINT copy_con CHECK (((x > 3) AND (y <> 'check failed'::text) AND (x < 7)))
);
@@ -51377,8 +51358,8 @@
CREATE TABLE public.insert_tbl (
x integer DEFAULT nextval('public.insert_seq'::regclass),
y text DEFAULT '-NULL-'::text,
- z integer DEFAULT ((-1) * currval('public.insert_seq'::regclass)),
- CONSTRAINT insert_con CHECK ((((x >= 3) AND (y <> 'check failed'::text)) AND (x < 8))),
+ z integer DEFAULT ('-1'::integer * currval('public.insert_seq'::regclass)),
+ CONSTRAINT insert_con CHECK (((x >= 3) AND (y <> 'check failed'::text) AND (x < 8))),
CONSTRAINT insert_tbl_check CHECK (((x + z) = 0))
);
@@ -52799,7 +52780,7 @@
int4smaller(rsh.sh_avail, rsl.sl_avail) AS total_avail
FROM public.shoe rsh,
public.shoelace rsl
- WHERE (((rsl.sl_color = rsh.slcolor) AND (rsl.sl_len_cm >= rsh.slminlen_cm)) AND (rsl.sl_len_cm <= rsh.slmaxlen_cm));
+ WHERE ((rsl.sl_color = rsh.slcolor) AND (rsl.sl_len_cm >= rsh.slminlen_cm) AND (rsl.sl_len_cm <= rsh.slmaxlen_cm));
ALTER TABLE public.shoe_ready OWNER TO buildfarm;
@@ -223090,9 +223071,9 @@
--
COPY public.test_tsquery (txtkeyword, txtsample, keyword, sample) FROM stdin;
-'New York' new & york | big & apple | nyc 'new' & 'york' ( 'new' & 'york' | 'big' & 'appl' ) | 'nyc'
+'New York' new & york | big & apple | nyc 'new' & 'york' 'new' & 'york' | 'big' & 'appl' | 'nyc'
Moscow moskva | moscow 'moscow' 'moskva' | 'moscow'
-'Sanct Peter' Peterburg | peter | 'Sanct Peterburg' 'sanct' & 'peter' ( 'peterburg' | 'peter' ) | 'sanct' & 'peterburg'
+'Sanct Peter' Peterburg | peter | 'Sanct Peterburg' 'sanct' & 'peter' 'peterburg' | 'peter' | 'sanct' & 'peterburg'
'foo bar qq' foo & (bar | qq) & city 'foo' & 'bar' & 'qq' 'foo' & ( 'bar' | 'qq' ) & 'citi'
\.
@@ -226085,8 +226066,8 @@
ON INSERT TO public.rule_and_refint_t3
WHERE (EXISTS ( SELECT 1
FROM public.rule_and_refint_t3 rule_and_refint_t3_1
- WHERE (((rule_and_refint_t3_1.id3a = new.id3a) AND (rule_and_refint_t3_1.id3b = new.id3b)) AND (rule_and_refint_t3_1.id3c = new.id3c)))) DO INSTEAD UPDATE public.rule_and_refint_t3 SET data = new.data
- WHERE (((rule_and_refint_t3.id3a = new.id3a) AND (rule_and_refint_t3.id3b = new.id3b)) AND (rule_and_refint_t3.id3c = new.id3c));
+ WHERE ((rule_and_refint_t3_1.id3a = new.id3a) AND (rule_and_refint_t3_1.id3b = new.id3b) AND (rule_and_refint_t3_1.id3c = new.id3c)))) DO INSTEAD UPDATE public.rule_and_refint_t3 SET data = new.data
+ WHERE ((rule_and_refint_t3.id3a = new.id3a) AND (rule_and_refint_t3.id3b = new.id3b) AND (rule_and_refint_t3.id3c = new.id3c));
--
@@ -226488,22 +226469,10 @@
--
--- Name: DATABASE regression; Type: ACL; Schema: -; Owner: buildfarm
---
-
-REVOKE ALL ON DATABASE regression FROM PUBLIC;
-REVOKE ALL ON DATABASE regression FROM buildfarm;
-GRANT ALL ON DATABASE regression TO buildfarm;
-GRANT CONNECT,TEMPORARY ON DATABASE regression TO PUBLIC;
-
-
---
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -226511,9 +226480,6 @@
-- Name: TABLE my_credit_card_normal; Type: ACL; Schema: public; Owner: buildfarm
--
-REVOKE ALL ON TABLE public.my_credit_card_normal FROM PUBLIC;
-REVOKE ALL ON TABLE public.my_credit_card_normal FROM buildfarm;
-GRANT ALL ON TABLE public.my_credit_card_normal TO buildfarm;
GRANT SELECT ON TABLE public.my_credit_card_normal TO PUBLIC;
@@ -226521,9 +226487,6 @@
-- Name: TABLE my_credit_card_secure; Type: ACL; Schema: public; Owner: buildfarm
--
-REVOKE ALL ON TABLE public.my_credit_card_secure FROM PUBLIC;
-REVOKE ALL ON TABLE public.my_credit_card_secure FROM buildfarm;
-GRANT ALL ON TABLE public.my_credit_card_secure TO buildfarm;
GRANT SELECT ON TABLE public.my_credit_card_secure TO PUBLIC;
@@ -226531,9 +226494,6 @@
-- Name: TABLE my_credit_card_usage_normal; Type: ACL; Schema: public; Owner: buildfarm
--
-REVOKE ALL ON TABLE public.my_credit_card_usage_normal FROM PUBLIC;
-REVOKE ALL ON TABLE public.my_credit_card_usage_normal FROM buildfarm;
-GRANT ALL ON TABLE public.my_credit_card_usage_normal TO buildfarm;
GRANT SELECT ON TABLE public.my_credit_card_usage_normal TO PUBLIC;
@@ -226541,9 +226501,6 @@
-- Name: TABLE my_credit_card_usage_secure; Type: ACL; Schema: public; Owner: buildfarm
--
-REVOKE ALL ON TABLE public.my_credit_card_usage_secure FROM PUBLIC;
-REVOKE ALL ON TABLE public.my_credit_card_usage_secure FROM buildfarm;
-GRANT ALL ON TABLE public.my_credit_card_usage_secure TO buildfarm;
GRANT SELECT ON TABLE public.my_credit_card_usage_secure TO PUBLIC;
@@ -226551,9 +226508,6 @@
-- Name: TABLE my_property_normal; Type: ACL; Schema: public; Owner: buildfarm
--
-REVOKE ALL ON TABLE public.my_property_normal FROM PUBLIC;
-REVOKE ALL ON TABLE public.my_property_normal FROM buildfarm;
-GRANT ALL ON TABLE public.my_property_normal TO buildfarm;
GRANT SELECT ON TABLE public.my_property_normal TO PUBLIC;
@@ -226561,9 +226515,6 @@
-- Name: TABLE my_property_secure; Type: ACL; Schema: public; Owner: buildfarm
--
-REVOKE ALL ON TABLE public.my_property_secure FROM PUBLIC;
-REVOKE ALL ON TABLE public.my_property_secure FROM buildfarm;
-GRANT ALL ON TABLE public.my_property_secure TO buildfarm;
GRANT SELECT ON TABLE public.my_property_secure TO PUBLIC;
dumpdiff-REL9_5_STABLE (text/plain; charset=UTF-8)
--- /home/andrew/bf/root/upgrade.crake/HEAD/origin-REL9_5_STABLE.sql.fixed 2021-09-12 16:04:40.281098310 -0400
+++ /home/andrew/bf/root/upgrade.crake/HEAD/converted-REL9_5_STABLE-to-HEAD.sql.fixed 2021-09-12 16:04:40.306098310 -0400
@@ -66,9 +66,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -7199,9 +7199,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -8042,9 +8042,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -26509,9 +26509,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -26851,9 +26851,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -26949,9 +26947,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -30182,9 +30180,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -30416,13 +30414,25 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
--
+-- Name: FUNCTION dblink_connect_u(text); Type: ACL; Schema: public; Owner: buildfarm
+--
+
+REVOKE ALL ON FUNCTION public.dblink_connect_u(text) FROM PUBLIC;
+
+
+--
+-- Name: FUNCTION dblink_connect_u(text, text); Type: ACL; Schema: public; Owner: buildfarm
+--
+
+REVOKE ALL ON FUNCTION public.dblink_connect_u(text, text) FROM PUBLIC;
+
+
+--
-- PostgreSQL database dump complete
--
@@ -30521,9 +30531,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -30626,9 +30634,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -30870,9 +30878,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -30982,9 +30990,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -31073,9 +31079,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -31164,9 +31168,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -32376,9 +32380,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -32467,9 +32469,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -32770,9 +32772,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -32861,9 +32861,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -35026,9 +35026,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -35228,9 +35226,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -36391,9 +36389,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -36552,9 +36550,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -36694,9 +36692,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -36872,9 +36870,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -39062,9 +39060,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -39976,9 +39972,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -42687,9 +42683,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -43007,9 +43003,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -43125,9 +43119,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -43216,9 +43208,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -43321,9 +43311,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -43490,9 +43480,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -43660,9 +43650,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -43765,9 +43753,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -43938,9 +43924,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -44029,9 +44013,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -45473,9 +45457,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -46368,9 +46352,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -47325,9 +47309,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -47372,9 +47354,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -47511,7 +47491,8 @@
--
CREATE TYPE public.arrayrange AS RANGE (
- subtype = integer[]
+ subtype = integer[],
+ multirange_type_name = public.arraymultirange
);
@@ -47534,7 +47515,8 @@
--
CREATE TYPE public.cashrange AS RANGE (
- subtype = money
+ subtype = money,
+ multirange_type_name = public.cashmultirange
);
@@ -47621,6 +47603,7 @@
INTERNALLENGTH = 16,
INPUT = public.int44in,
OUTPUT = public.int44out,
+ SUBSCRIPT = raw_array_subscript_handler,
ELEMENT = integer,
CATEGORY = 'x',
PREFERRED = true,
@@ -47668,6 +47651,7 @@
CREATE TYPE public.float8range AS RANGE (
subtype = double precision,
+ multirange_type_name = public.float8multirange,
subtype_diff = float8mi
);
@@ -48194,6 +48178,7 @@
CREATE TYPE public.textrange AS RANGE (
subtype = text,
+ multirange_type_name = public.textmultirange,
collation = pg_catalog."C"
);
@@ -249867,9 +249854,9 @@
--
COPY public.test_tsquery (txtkeyword, txtsample, keyword, sample) FROM stdin;
-'New York' new & york | big & apple | nyc 'new' & 'york' ( 'new' & 'york' | 'big' & 'appl' ) | 'nyc'
+'New York' new & york | big & apple | nyc 'new' & 'york' 'new' & 'york' | 'big' & 'appl' | 'nyc'
Moscow moskva | moscow 'moscow' 'moskva' | 'moscow'
-'Sanct Peter' Peterburg | peter | 'Sanct Peterburg' 'sanct' & 'peter' ( 'peterburg' | 'peter' ) | 'sanct' & 'peterburg'
+'Sanct Peter' Peterburg | peter | 'Sanct Peterburg' 'sanct' & 'peter' 'peterburg' | 'peter' | 'sanct' & 'peterburg'
'foo bar qq' foo & (bar | qq) & city 'foo' & 'bar' & 'qq' 'foo' & ( 'bar' | 'qq' ) & 'citi'
\.
@@ -253535,22 +253522,10 @@
ALTER TABLE rls_regress_schema.rls_tbl_force ENABLE ROW LEVEL SECURITY;
--
--- Name: DATABASE regression; Type: ACL; Schema: -; Owner: buildfarm
---
-
-REVOKE ALL ON DATABASE regression FROM PUBLIC;
-REVOKE ALL ON DATABASE regression FROM buildfarm;
-GRANT ALL ON DATABASE regression TO buildfarm;
-GRANT CONNECT,TEMPORARY ON DATABASE regression TO PUBLIC;
-
-
---
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -253558,9 +253533,6 @@
-- Name: TABLE my_credit_card_normal; Type: ACL; Schema: public; Owner: buildfarm
--
-REVOKE ALL ON TABLE public.my_credit_card_normal FROM PUBLIC;
-REVOKE ALL ON TABLE public.my_credit_card_normal FROM buildfarm;
-GRANT ALL ON TABLE public.my_credit_card_normal TO buildfarm;
GRANT SELECT ON TABLE public.my_credit_card_normal TO PUBLIC;
@@ -253568,9 +253540,6 @@
-- Name: TABLE my_credit_card_secure; Type: ACL; Schema: public; Owner: buildfarm
--
-REVOKE ALL ON TABLE public.my_credit_card_secure FROM PUBLIC;
-REVOKE ALL ON TABLE public.my_credit_card_secure FROM buildfarm;
-GRANT ALL ON TABLE public.my_credit_card_secure TO buildfarm;
GRANT SELECT ON TABLE public.my_credit_card_secure TO PUBLIC;
@@ -253578,9 +253547,6 @@
-- Name: TABLE my_credit_card_usage_normal; Type: ACL; Schema: public; Owner: buildfarm
--
-REVOKE ALL ON TABLE public.my_credit_card_usage_normal FROM PUBLIC;
-REVOKE ALL ON TABLE public.my_credit_card_usage_normal FROM buildfarm;
-GRANT ALL ON TABLE public.my_credit_card_usage_normal TO buildfarm;
GRANT SELECT ON TABLE public.my_credit_card_usage_normal TO PUBLIC;
@@ -253588,9 +253554,6 @@
-- Name: TABLE my_credit_card_usage_secure; Type: ACL; Schema: public; Owner: buildfarm
--
-REVOKE ALL ON TABLE public.my_credit_card_usage_secure FROM PUBLIC;
-REVOKE ALL ON TABLE public.my_credit_card_usage_secure FROM buildfarm;
-GRANT ALL ON TABLE public.my_credit_card_usage_secure TO buildfarm;
GRANT SELECT ON TABLE public.my_credit_card_usage_secure TO PUBLIC;
@@ -253598,9 +253561,6 @@
-- Name: TABLE my_property_normal; Type: ACL; Schema: public; Owner: buildfarm
--
-REVOKE ALL ON TABLE public.my_property_normal FROM PUBLIC;
-REVOKE ALL ON TABLE public.my_property_normal FROM buildfarm;
-GRANT ALL ON TABLE public.my_property_normal TO buildfarm;
GRANT SELECT ON TABLE public.my_property_normal TO PUBLIC;
@@ -253608,9 +253568,6 @@
-- Name: TABLE my_property_secure; Type: ACL; Schema: public; Owner: buildfarm
--
-REVOKE ALL ON TABLE public.my_property_secure FROM PUBLIC;
-REVOKE ALL ON TABLE public.my_property_secure FROM buildfarm;
-GRANT ALL ON TABLE public.my_property_secure TO buildfarm;
GRANT SELECT ON TABLE public.my_property_secure TO PUBLIC;
Attachment: upgrade-diffs.patch (text/x-patch)
diff --git a/PGBuild/Modules/TestUpgradeXversion.pm b/PGBuild/Modules/TestUpgradeXversion.pm
index 79c24c4..8aed93c 100644
--- a/PGBuild/Modules/TestUpgradeXversion.pm
+++ b/PGBuild/Modules/TestUpgradeXversion.pm
@@ -689,9 +689,26 @@ sub test_upgrade ## no critic (Subroutines::ProhibitManyArgs)
return if $?;
}
- system( qq{diff -I "^-- " -u "$upgrade_loc/origin-$oversion.sql" }
- . qq{"$upgrade_loc/converted-$oversion-to-$this_branch.sql" }
- . qq{> "$upgrade_loc/dumpdiff-$oversion" 2>&1});
+ foreach my $dump ("$upgrade_loc/origin-$oversion.sql",
+ "$upgrade_loc/converted-$oversion-to-$this_branch.sql")
+ {
+ # would like to use lookbehind here but perl complains
+ # so do it this way
+ my $contents = file_contents($dump);
+ $contents =~ s/
+ (^CREATE\sTRIGGER\s.*?)
+ \sEXECUTE\sPROCEDURE
+ /$1 EXECUTE FUNCTION/mgx;
+ open(my $dh, '>', "$dump.fixed") || die "opening $dump.fixed";
+ print $dh $contents;
+ close($dh);
+ }
+
+ system( qq{diff -I "^\$" -I "SET default_table_access_method = heap;" }
+ . qq{ -I "^SET default_toast_compression = 'pglz';\$" -I "^-- " }
+ . qq{-u "$upgrade_loc/origin-$oversion.sql.fixed" }
+ . qq{"$upgrade_loc/converted-$oversion-to-$this_branch.sql.fixed" }
+ . qq{> "$upgrade_loc/dumpdiff-$oversion" 2>&1});
# diff exits with status 1 if files differ
return if $? >> 8 > 1;
@@ -699,7 +716,10 @@ sub test_upgrade ## no critic (Subroutines::ProhibitManyArgs)
open(my $diffile, '<', "$upgrade_loc/dumpdiff-$oversion")
|| die "opening $upgrade_loc/dumpdiff-$oversion: $!";
my $difflines = 0;
- $difflines++ while <$diffile>;
+ while (<$diffile>)
+ {
+ $difflines++ if /^[+-]/;
+ }
close($diffile);
# If the versions match we expect a possible handful of diffs,
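The patch above leans on diff's `-I` (ignore-matching-lines) flag to suppress hunks that consist only of expected cross-version noise, such as the new `SET default_table_access_method = heap;` line. A minimal sketch of that behavior (hypothetical file names, GNU diff assumed): when every changed line in a hunk matches the `-I` regex, the hunk is dropped and diff exits 0 as if the files matched.

```shell
# Old dump carries a SET line the new dump omits; nothing else differs.
printf 'SET default_table_access_method = heap;\nCREATE TABLE t (a int);\n' > old.sql
printf 'CREATE TABLE t (a int);\n' > new.sql

# -I suppresses hunks whose inserted/deleted lines all match the regex,
# so this reports no differences (exit status 0).
diff -I "SET default_table_access_method = heap;" -u old.sql new.sql
echo "rc=$?"
```

This is why the patch can stack several `-I` patterns and then count only remaining `+`/`-` lines: anything left over is a real discrepancy, not formatting drift.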
Attachment: dumpdiff-REL9_2_STABLE (text/plain)
--- /home/andrew/bf/root/upgrade.crake/HEAD/origin-REL9_2_STABLE.sql.fixed 2021-09-12 16:01:18.046095133 -0400
+++ /home/andrew/bf/root/upgrade.crake/HEAD/converted-REL9_2_STABLE-to-HEAD.sql.fixed 2021-09-12 16:01:18.064095134 -0400
@@ -62,9 +62,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -283,10 +283,6 @@
-- Name: DATABASE contrib_regression; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON DATABASE contrib_regression FROM PUBLIC;
-REVOKE ALL ON DATABASE contrib_regression FROM buildfarm;
-GRANT ALL ON DATABASE contrib_regression TO buildfarm;
-GRANT CONNECT,TEMPORARY ON DATABASE contrib_regression TO PUBLIC;
GRANT ALL ON DATABASE contrib_regression TO dblink_regression_test;
@@ -294,13 +290,25 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
--
+-- Name: FUNCTION dblink_connect_u(text); Type: ACL; Schema: public; Owner: buildfarm
+--
+
+REVOKE ALL ON FUNCTION public.dblink_connect_u(text) FROM PUBLIC;
+
+
+--
+-- Name: FUNCTION dblink_connect_u(text, text); Type: ACL; Schema: public; Owner: buildfarm
+--
+
+REVOKE ALL ON FUNCTION public.dblink_connect_u(text, text) FROM PUBLIC;
+
+
+--
-- PostgreSQL database dump complete
--
@@ -7427,9 +7437,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -8270,9 +8280,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -26737,9 +26747,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -26991,9 +27001,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -30224,9 +30234,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -30329,9 +30337,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -30434,9 +30440,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -30546,9 +30552,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -30637,9 +30641,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -30728,9 +30730,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -31918,9 +31920,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -32009,9 +32009,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -34174,9 +34174,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -35335,9 +35335,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -35496,9 +35496,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -35630,9 +35630,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -35808,9 +35808,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -36374,9 +36372,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -39085,9 +39083,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -39405,9 +39403,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -39523,9 +39519,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -39628,9 +39622,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -39719,9 +39711,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -40548,7 +40540,9 @@
--
CREATE VIEW public.trigger_test_view AS
-SELECT trigger_test.i, trigger_test.v FROM public.trigger_test;
+ SELECT trigger_test.i,
+ trigger_test.v
+ FROM public.trigger_test;
ALTER TABLE public.trigger_test_view OWNER TO buildfarm;
@@ -40661,9 +40655,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -40708,9 +40700,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -40822,7 +40812,8 @@
--
CREATE TYPE public.arrayrange AS RANGE (
- subtype = integer[]
+ subtype = integer[],
+ multirange_type_name = public.arraymultirange
);
@@ -40845,7 +40836,8 @@
--
CREATE TYPE public.cashrange AS RANGE (
- subtype = money
+ subtype = money,
+ multirange_type_name = public.cashmultirange
);
@@ -40932,6 +40924,7 @@
INTERNALLENGTH = 16,
INPUT = public.int44in,
OUTPUT = public.int44out,
+ SUBSCRIPT = raw_array_subscript_handler,
ELEMENT = integer,
CATEGORY = 'x',
PREFERRED = true,
@@ -40979,6 +40972,7 @@
CREATE TYPE public.float8range AS RANGE (
subtype = double precision,
+ multirange_type_name = public.float8multirange,
subtype_diff = float8mi
);
@@ -41377,6 +41371,7 @@
CREATE TYPE public.textrange AS RANGE (
subtype = text,
+ multirange_type_name = public.textmultirange,
collation = pg_catalog."C"
);
@@ -45617,7 +45614,7 @@
x integer,
y text,
z integer,
- CONSTRAINT sequence_con CHECK ((((x > 3) AND (y <> 'check failed'::text)) AND (z < 8)))
+ CONSTRAINT sequence_con CHECK (((x > 3) AND (y <> 'check failed'::text) AND (z < 8)))
);
@@ -45760,7 +45757,7 @@
x integer,
y text,
z integer,
- CONSTRAINT copy_con CHECK ((((x > 3) AND (y <> 'check failed'::text)) AND (x < 7)))
+ CONSTRAINT copy_con CHECK (((x > 3) AND (y <> 'check failed'::text) AND (x < 7)))
);
@@ -45975,7 +45972,8 @@
--
CREATE VIEW public.domview AS
-SELECT (domtab.col1)::public.dom AS col1 FROM public.domtab;
+ SELECT (domtab.col1)::public.dom AS col1
+ FROM public.domtab;
ALTER TABLE public.domview OWNER TO buildfarm;
@@ -46262,7 +46260,12 @@
--
CREATE VIEW public.iexit AS
-SELECT ih.name, ih.thepath, public.interpt_pp(ih.thepath, r.thepath) AS exit FROM public.ihighway ih, public.ramp r WHERE (ih.thepath OPERATOR(public.##) r.thepath);
+ SELECT ih.name,
+ ih.thepath,
+ public.interpt_pp(ih.thepath, r.thepath) AS exit
+ FROM public.ihighway ih,
+ public.ramp r
+ WHERE (ih.thepath OPERATOR(public.##) r.thepath);
ALTER TABLE public.iexit OWNER TO buildfarm;
@@ -46337,8 +46340,8 @@
CREATE TABLE public.insert_tbl (
x integer DEFAULT nextval('public.insert_seq'::regclass),
y text DEFAULT '-NULL-'::text,
- z integer DEFAULT ((-1) * currval('public.insert_seq'::regclass)),
- CONSTRAINT insert_con CHECK ((((x >= 3) AND (y <> 'check failed'::text)) AND (x < 8))),
+ z integer DEFAULT ('-1'::integer * currval('public.insert_seq'::regclass)),
+ CONSTRAINT insert_con CHECK (((x >= 3) AND (y <> 'check failed'::text) AND (x < 8))),
CONSTRAINT insert_tbl_check CHECK (((x + z) = 0))
);
@@ -46487,7 +46490,15 @@
--
CREATE VIEW public.my_credit_card_normal AS
-SELECT l.cid, l.name, l.tel, l.passwd, r.cnum, r.climit FROM (public.customer l NATURAL JOIN public.credit_card r) WHERE (l.name = ("current_user"())::text);
+ SELECT l.cid,
+ l.name,
+ l.tel,
+ l.passwd,
+ r.cnum,
+ r.climit
+ FROM (public.customer l
+ JOIN public.credit_card r USING (cid))
+ WHERE (l.name = ("current_user"())::text);
ALTER TABLE public.my_credit_card_normal OWNER TO buildfarm;
@@ -46497,7 +46508,15 @@
--
CREATE VIEW public.my_credit_card_secure WITH (security_barrier='true') AS
-SELECT l.cid, l.name, l.tel, l.passwd, r.cnum, r.climit FROM (public.customer l NATURAL JOIN public.credit_card r) WHERE (l.name = ("current_user"())::text);
+ SELECT l.cid,
+ l.name,
+ l.tel,
+ l.passwd,
+ r.cnum,
+ r.climit
+ FROM (public.customer l
+ JOIN public.credit_card r USING (cid))
+ WHERE (l.name = ("current_user"())::text);
ALTER TABLE public.my_credit_card_secure OWNER TO buildfarm;
@@ -46507,7 +46526,16 @@
--
CREATE VIEW public.my_credit_card_usage_normal AS
-SELECT l.cid, l.name, l.tel, l.passwd, l.cnum, l.climit, r.ymd, r.usage FROM (public.my_credit_card_secure l NATURAL JOIN public.credit_usage r);
+ SELECT l.cid,
+ l.name,
+ l.tel,
+ l.passwd,
+ l.cnum,
+ l.climit,
+ r.ymd,
+ r.usage
+ FROM (public.my_credit_card_secure l
+ JOIN public.credit_usage r USING (cid));
ALTER TABLE public.my_credit_card_usage_normal OWNER TO buildfarm;
@@ -46517,7 +46545,16 @@
--
CREATE VIEW public.my_credit_card_usage_secure WITH (security_barrier='true') AS
-SELECT l.cid, l.name, l.tel, l.passwd, l.cnum, l.climit, r.ymd, r.usage FROM (public.my_credit_card_secure l NATURAL JOIN public.credit_usage r);
+ SELECT l.cid,
+ l.name,
+ l.tel,
+ l.passwd,
+ l.cnum,
+ l.climit,
+ r.ymd,
+ r.usage
+ FROM (public.my_credit_card_secure l
+ JOIN public.credit_usage r USING (cid));
ALTER TABLE public.my_credit_card_usage_secure OWNER TO buildfarm;
@@ -46527,7 +46564,12 @@
--
CREATE VIEW public.my_property_normal WITH (security_barrier='true') AS
-SELECT customer.cid, customer.name, customer.tel, customer.passwd FROM public.customer WHERE (customer.name = ("current_user"())::text);
+ SELECT customer.cid,
+ customer.name,
+ customer.tel,
+ customer.passwd
+ FROM public.customer
+ WHERE (customer.name = ("current_user"())::text);
ALTER TABLE public.my_property_normal OWNER TO buildfarm;
@@ -46537,7 +46579,12 @@
--
CREATE VIEW public.my_property_secure WITH (security_barrier='false') AS
-SELECT customer.cid, customer.name, customer.tel, customer.passwd FROM public.customer WHERE (customer.name = ("current_user"())::text);
+ SELECT customer.cid,
+ customer.name,
+ customer.tel,
+ customer.passwd
+ FROM public.customer
+ WHERE (customer.name = ("current_user"())::text);
ALTER TABLE public.my_property_secure OWNER TO buildfarm;
@@ -46830,7 +46877,11 @@
--
CREATE VIEW public.pfield_v1 AS
-SELECT pf.pfname, pf.slotname, public.pslot_backlink_view(pf.slotname) AS backside, public.pslot_slotlink_view(pf.slotname) AS patch FROM public.pslot pf;
+ SELECT pf.pfname,
+ pf.slotname,
+ public.pslot_backlink_view(pf.slotname) AS backside,
+ public.pslot_slotlink_view(pf.slotname) AS patch
+ FROM public.pslot pf;
ALTER TABLE public.pfield_v1 OWNER TO buildfarm;
@@ -47301,7 +47352,9 @@
--
CREATE VIEW public.rtest_v1 AS
-SELECT rtest_t1.a, rtest_t1.b FROM public.rtest_t1;
+ SELECT rtest_t1.a,
+ rtest_t1.b
+ FROM public.rtest_t1;
ALTER TABLE public.rtest_v1 OWNER TO buildfarm;
@@ -47311,7 +47364,11 @@
--
CREATE VIEW public.rtest_vcomp AS
-SELECT x.part, (x.size * y.factor) AS size_in_cm FROM public.rtest_comp x, public.rtest_unitfact y WHERE (x.unit = y.unit);
+ SELECT x.part,
+ (x.size * y.factor) AS size_in_cm
+ FROM public.rtest_comp x,
+ public.rtest_unitfact y
+ WHERE (x.unit = y.unit);
ALTER TABLE public.rtest_vcomp OWNER TO buildfarm;
@@ -47370,7 +47427,12 @@
--
CREATE VIEW public.rtest_vview1 AS
-SELECT x.a, x.b FROM public.rtest_view1 x WHERE (0 < (SELECT count(*) AS count FROM public.rtest_view2 y WHERE (y.a = x.a)));
+ SELECT x.a,
+ x.b
+ FROM public.rtest_view1 x
+ WHERE (0 < ( SELECT count(*) AS count
+ FROM public.rtest_view2 y
+ WHERE (y.a = x.a)));
ALTER TABLE public.rtest_vview1 OWNER TO buildfarm;
@@ -47380,7 +47442,10 @@
--
CREATE VIEW public.rtest_vview2 AS
-SELECT rtest_view1.a, rtest_view1.b FROM public.rtest_view1 WHERE rtest_view1.v;
+ SELECT rtest_view1.a,
+ rtest_view1.b
+ FROM public.rtest_view1
+ WHERE rtest_view1.v;
ALTER TABLE public.rtest_vview2 OWNER TO buildfarm;
@@ -47390,7 +47455,12 @@
--
CREATE VIEW public.rtest_vview3 AS
-SELECT x.a, x.b FROM public.rtest_vview2 x WHERE (0 < (SELECT count(*) AS count FROM public.rtest_view2 y WHERE (y.a = x.a)));
+ SELECT x.a,
+ x.b
+ FROM public.rtest_vview2 x
+ WHERE (0 < ( SELECT count(*) AS count
+ FROM public.rtest_view2 y
+ WHERE (y.a = x.a)));
ALTER TABLE public.rtest_vview3 OWNER TO buildfarm;
@@ -47400,7 +47470,13 @@
--
CREATE VIEW public.rtest_vview4 AS
-SELECT x.a, x.b, count(y.a) AS refcount FROM public.rtest_view1 x, public.rtest_view2 y WHERE (x.a = y.a) GROUP BY x.a, x.b;
+ SELECT x.a,
+ x.b,
+ count(y.a) AS refcount
+ FROM public.rtest_view1 x,
+ public.rtest_view2 y
+ WHERE (x.a = y.a)
+ GROUP BY x.a, x.b;
ALTER TABLE public.rtest_vview4 OWNER TO buildfarm;
@@ -47410,7 +47486,10 @@
--
CREATE VIEW public.rtest_vview5 AS
-SELECT rtest_view1.a, rtest_view1.b, public.rtest_viewfunc1(rtest_view1.a) AS refcount FROM public.rtest_view1;
+ SELECT rtest_view1.a,
+ rtest_view1.b,
+ public.rtest_viewfunc1(rtest_view1.a) AS refcount
+ FROM public.rtest_view1;
ALTER TABLE public.rtest_vview5 OWNER TO buildfarm;
@@ -47537,7 +47616,17 @@
--
CREATE VIEW public.shoe AS
-SELECT sh.shoename, sh.sh_avail, sh.slcolor, sh.slminlen, (sh.slminlen * un.un_fact) AS slminlen_cm, sh.slmaxlen, (sh.slmaxlen * un.un_fact) AS slmaxlen_cm, sh.slunit FROM public.shoe_data sh, public.unit un WHERE (sh.slunit = un.un_name);
+ SELECT sh.shoename,
+ sh.sh_avail,
+ sh.slcolor,
+ sh.slminlen,
+ (sh.slminlen * un.un_fact) AS slminlen_cm,
+ sh.slmaxlen,
+ (sh.slmaxlen * un.un_fact) AS slmaxlen_cm,
+ sh.slunit
+ FROM public.shoe_data sh,
+ public.unit un
+ WHERE (sh.slunit = un.un_name);
ALTER TABLE public.shoe OWNER TO buildfarm;
@@ -47562,7 +47651,15 @@
--
CREATE VIEW public.shoelace AS
-SELECT s.sl_name, s.sl_avail, s.sl_color, s.sl_len, s.sl_unit, (s.sl_len * u.un_fact) AS sl_len_cm FROM public.shoelace_data s, public.unit u WHERE (s.sl_unit = u.un_name);
+ SELECT s.sl_name,
+ s.sl_avail,
+ s.sl_color,
+ s.sl_len,
+ s.sl_unit,
+ (s.sl_len * u.un_fact) AS sl_len_cm
+ FROM public.shoelace_data s,
+ public.unit u
+ WHERE (s.sl_unit = u.un_name);
ALTER TABLE public.shoelace OWNER TO buildfarm;
@@ -47572,7 +47669,14 @@
--
CREATE VIEW public.shoe_ready AS
-SELECT rsh.shoename, rsh.sh_avail, rsl.sl_name, rsl.sl_avail, int4smaller(rsh.sh_avail, rsl.sl_avail) AS total_avail FROM public.shoe rsh, public.shoelace rsl WHERE (((rsl.sl_color = rsh.slcolor) AND (rsl.sl_len_cm >= rsh.slminlen_cm)) AND (rsl.sl_len_cm <= rsh.slmaxlen_cm));
+ SELECT rsh.shoename,
+ rsh.sh_avail,
+ rsl.sl_name,
+ rsl.sl_avail,
+ int4smaller(rsh.sh_avail, rsl.sl_avail) AS total_avail
+ FROM public.shoe rsh,
+ public.shoelace rsl
+ WHERE ((rsl.sl_color = rsh.slcolor) AND (rsl.sl_len_cm >= rsh.slminlen_cm) AND (rsl.sl_len_cm <= rsh.slmaxlen_cm));
ALTER TABLE public.shoe_ready OWNER TO buildfarm;
@@ -47594,7 +47698,16 @@
--
CREATE VIEW public.shoelace_obsolete AS
-SELECT shoelace.sl_name, shoelace.sl_avail, shoelace.sl_color, shoelace.sl_len, shoelace.sl_unit, shoelace.sl_len_cm FROM public.shoelace WHERE (NOT (EXISTS (SELECT shoe.shoename FROM public.shoe WHERE (shoe.slcolor = shoelace.sl_color))));
+ SELECT shoelace.sl_name,
+ shoelace.sl_avail,
+ shoelace.sl_color,
+ shoelace.sl_len,
+ shoelace.sl_unit,
+ shoelace.sl_len_cm
+ FROM public.shoelace
+ WHERE (NOT (EXISTS ( SELECT shoe.shoename
+ FROM public.shoe
+ WHERE (shoe.slcolor = shoelace.sl_color))));
ALTER TABLE public.shoelace_obsolete OWNER TO buildfarm;
@@ -47604,7 +47717,14 @@
--
CREATE VIEW public.shoelace_candelete AS
-SELECT shoelace_obsolete.sl_name, shoelace_obsolete.sl_avail, shoelace_obsolete.sl_color, shoelace_obsolete.sl_len, shoelace_obsolete.sl_unit, shoelace_obsolete.sl_len_cm FROM public.shoelace_obsolete WHERE (shoelace_obsolete.sl_avail = 0);
+ SELECT shoelace_obsolete.sl_name,
+ shoelace_obsolete.sl_avail,
+ shoelace_obsolete.sl_color,
+ shoelace_obsolete.sl_len,
+ shoelace_obsolete.sl_unit,
+ shoelace_obsolete.sl_len_cm
+ FROM public.shoelace_obsolete
+ WHERE (shoelace_obsolete.sl_avail = 0);
ALTER TABLE public.shoelace_candelete OWNER TO buildfarm;
@@ -47651,7 +47771,12 @@
--
CREATE VIEW public.street AS
-SELECT r.name, r.thepath, c.cname FROM ONLY public.road r, public.real_city c WHERE (c.outline OPERATOR(public.##) r.thepath);
+ SELECT r.name,
+ r.thepath,
+ c.cname
+ FROM ONLY public.road r,
+ public.real_city c
+ WHERE (c.outline OPERATOR(public.##) r.thepath);
ALTER TABLE public.street OWNER TO buildfarm;
@@ -47985,7 +48110,11 @@
--
CREATE VIEW public.toyemp AS
-SELECT emp.name, emp.age, emp.location, (12 * emp.salary) AS annualsal FROM public.emp;
+ SELECT emp.name,
+ emp.age,
+ emp.location,
+ (12 * emp.salary) AS annualsal
+ FROM public.emp;
ALTER TABLE public.toyemp OWNER TO buildfarm;
@@ -48138,7 +48267,7 @@
--
CREATE VIEW public.xmlview1 AS
-SELECT xmlcomment('test'::text) AS xmlcomment;
+ SELECT xmlcomment('test'::text) AS xmlcomment;
ALTER TABLE public.xmlview1 OWNER TO buildfarm;
@@ -48148,7 +48277,7 @@
--
CREATE VIEW public.xmlview2 AS
-SELECT XMLCONCAT('hello'::xml, 'you'::xml) AS "xmlconcat";
+ SELECT XMLCONCAT('hello'::xml, 'you'::xml) AS "xmlconcat";
ALTER TABLE public.xmlview2 OWNER TO buildfarm;
@@ -48158,7 +48287,7 @@
--
CREATE VIEW public.xmlview3 AS
-SELECT XMLELEMENT(NAME element, XMLATTRIBUTES(1 AS ":one:", 'deuce' AS two), 'content&') AS "xmlelement";
+ SELECT XMLELEMENT(NAME element, XMLATTRIBUTES(1 AS ":one:", 'deuce' AS two), 'content&') AS "xmlelement";
ALTER TABLE public.xmlview3 OWNER TO buildfarm;
@@ -48168,7 +48297,8 @@
--
CREATE VIEW public.xmlview4 AS
-SELECT XMLELEMENT(NAME employee, XMLFOREST(emp.name AS name, emp.age AS age, emp.salary AS pay)) AS "xmlelement" FROM public.emp;
+ SELECT XMLELEMENT(NAME employee, XMLFOREST(emp.name AS name, emp.age AS age, emp.salary AS pay)) AS "xmlelement"
+ FROM public.emp;
ALTER TABLE public.xmlview4 OWNER TO buildfarm;
@@ -48178,7 +48308,7 @@
--
CREATE VIEW public.xmlview5 AS
-SELECT XMLPARSE(CONTENT '<abc>x</abc>'::text STRIP WHITESPACE) AS "xmlparse";
+ SELECT XMLPARSE(CONTENT '<abc>x</abc>'::text STRIP WHITESPACE) AS "xmlparse";
ALTER TABLE public.xmlview5 OWNER TO buildfarm;
@@ -48188,7 +48318,7 @@
--
CREATE VIEW public.xmlview6 AS
-SELECT XMLPI(NAME foo, 'bar'::text) AS "xmlpi";
+ SELECT XMLPI(NAME foo, 'bar'::text) AS "xmlpi";
ALTER TABLE public.xmlview6 OWNER TO buildfarm;
@@ -48198,7 +48328,7 @@
--
CREATE VIEW public.xmlview7 AS
-SELECT XMLROOT('<foo/>'::xml, VERSION NO VALUE, STANDALONE YES) AS "xmlroot";
+ SELECT XMLROOT('<foo/>'::xml, VERSION NO VALUE, STANDALONE YES) AS "xmlroot";
ALTER TABLE public.xmlview7 OWNER TO buildfarm;
@@ -48208,7 +48338,7 @@
--
CREATE VIEW public.xmlview8 AS
-SELECT (XMLSERIALIZE(CONTENT 'good'::xml AS character(10)))::character(10) AS "xmlserialize";
+ SELECT (XMLSERIALIZE(CONTENT 'good'::xml AS character(10)))::character(10) AS "xmlserialize";
ALTER TABLE public.xmlview8 OWNER TO buildfarm;
@@ -48218,7 +48348,7 @@
--
CREATE VIEW public.xmlview9 AS
-SELECT XMLSERIALIZE(CONTENT 'good'::xml AS text) AS "xmlserialize";
+ SELECT XMLSERIALIZE(CONTENT 'good'::xml AS text) AS "xmlserialize";
ALTER TABLE public.xmlview9 OWNER TO buildfarm;
@@ -211371,9 +211501,9 @@
--
COPY public.test_tsquery (txtkeyword, txtsample, keyword, sample) FROM stdin;
-'New York' new & york | big & apple | nyc 'new' & 'york' ( 'new' & 'york' | 'big' & 'appl' ) | 'nyc'
+'New York' new & york | big & apple | nyc 'new' & 'york' 'new' & 'york' | 'big' & 'appl' | 'nyc'
Moscow moskva | moscow 'moscow' 'moskva' | 'moscow'
-'Sanct Peter' Peterburg | peter | 'Sanct Peterburg' 'sanct' & 'peter' ( 'peterburg' | 'peter' ) | 'sanct' & 'peterburg'
+'Sanct Peter' Peterburg | peter | 'Sanct Peterburg' 'sanct' & 'peter' 'peterburg' | 'peter' | 'sanct' & 'peterburg'
'foo bar qq' foo & (bar | qq) & city 'foo' & 'bar' & 'qq' 'foo' & ( 'bar' | 'qq' ) & 'citi'
\.
@@ -212954,203 +213084,277 @@
-- Name: shoelace_data log_shoelace; Type: RULE; Schema: public; Owner: buildfarm
--
-CREATE RULE log_shoelace AS ON UPDATE TO public.shoelace_data WHERE (new.sl_avail <> old.sl_avail) DO INSERT INTO public.shoelace_log (sl_name, sl_avail, log_who, log_when) VALUES (new.sl_name, new.sl_avail, 'Al Bundy'::name, '1970-01-01 00:00:00'::timestamp without time zone);
+CREATE RULE log_shoelace AS
+ ON UPDATE TO public.shoelace_data
+ WHERE (new.sl_avail <> old.sl_avail) DO INSERT INTO public.shoelace_log (sl_name, sl_avail, log_who, log_when)
+ VALUES (new.sl_name, new.sl_avail, 'Al Bundy'::name, '1970-01-01 00:00:00'::timestamp without time zone);
--
-- Name: ruletest_tbl myrule; Type: RULE; Schema: public; Owner: buildfarm
--
-CREATE RULE myrule AS ON INSERT TO public.ruletest_tbl DO INSTEAD INSERT INTO public.ruletest_tbl2 (a, b) VALUES (1000, 1000);
+CREATE RULE myrule AS
+ ON INSERT TO public.ruletest_tbl DO INSTEAD INSERT INTO public.ruletest_tbl2 (a, b)
+ VALUES (1000, 1000);
--
-- Name: rtest_emp rtest_emp_del; Type: RULE; Schema: public; Owner: buildfarm
--
-CREATE RULE rtest_emp_del AS ON DELETE TO public.rtest_emp DO INSERT INTO public.rtest_emplog (ename, who, action, newsal, oldsal) VALUES (old.ename, "current_user"(), 'fired'::bpchar, '$0.00'::money, old.salary);
+CREATE RULE rtest_emp_del AS
+ ON DELETE TO public.rtest_emp DO INSERT INTO public.rtest_emplog (ename, who, action, newsal, oldsal)
+ VALUES (old.ename, "current_user"(), 'fired'::bpchar, '$0.00'::money, old.salary);
--
-- Name: rtest_emp rtest_emp_ins; Type: RULE; Schema: public; Owner: buildfarm
--
-CREATE RULE rtest_emp_ins AS ON INSERT TO public.rtest_emp DO INSERT INTO public.rtest_emplog (ename, who, action, newsal, oldsal) VALUES (new.ename, "current_user"(), 'hired'::bpchar, new.salary, '$0.00'::money);
+CREATE RULE rtest_emp_ins AS
+ ON INSERT TO public.rtest_emp DO INSERT INTO public.rtest_emplog (ename, who, action, newsal, oldsal)
+ VALUES (new.ename, "current_user"(), 'hired'::bpchar, new.salary, '$0.00'::money);
--
-- Name: rtest_emp rtest_emp_upd; Type: RULE; Schema: public; Owner: buildfarm
--
-CREATE RULE rtest_emp_upd AS ON UPDATE TO public.rtest_emp WHERE (new.salary <> old.salary) DO INSERT INTO public.rtest_emplog (ename, who, action, newsal, oldsal) VALUES (new.ename, "current_user"(), 'honored'::bpchar, new.salary, old.salary);
+CREATE RULE rtest_emp_upd AS
+ ON UPDATE TO public.rtest_emp
+ WHERE (new.salary <> old.salary) DO INSERT INTO public.rtest_emplog (ename, who, action, newsal, oldsal)
+ VALUES (new.ename, "current_user"(), 'honored'::bpchar, new.salary, old.salary);
--
-- Name: rtest_nothn1 rtest_nothn_r1; Type: RULE; Schema: public; Owner: buildfarm
--
-CREATE RULE rtest_nothn_r1 AS ON INSERT TO public.rtest_nothn1 WHERE ((new.a >= 10) AND (new.a < 20)) DO INSTEAD NOTHING;
+CREATE RULE rtest_nothn_r1 AS
+ ON INSERT TO public.rtest_nothn1
+ WHERE ((new.a >= 10) AND (new.a < 20)) DO INSTEAD NOTHING;
--
-- Name: rtest_nothn1 rtest_nothn_r2; Type: RULE; Schema: public; Owner: buildfarm
--
-CREATE RULE rtest_nothn_r2 AS ON INSERT TO public.rtest_nothn1 WHERE ((new.a >= 30) AND (new.a < 40)) DO INSTEAD NOTHING;
+CREATE RULE rtest_nothn_r2 AS
+ ON INSERT TO public.rtest_nothn1
+ WHERE ((new.a >= 30) AND (new.a < 40)) DO INSTEAD NOTHING;
--
-- Name: rtest_nothn2 rtest_nothn_r3; Type: RULE; Schema: public; Owner: buildfarm
--
-CREATE RULE rtest_nothn_r3 AS ON INSERT TO public.rtest_nothn2 WHERE (new.a >= 100) DO INSTEAD INSERT INTO public.rtest_nothn3 (a, b) VALUES (new.a, new.b);
+CREATE RULE rtest_nothn_r3 AS
+ ON INSERT TO public.rtest_nothn2
+ WHERE (new.a >= 100) DO INSTEAD INSERT INTO public.rtest_nothn3 (a, b)
+ VALUES (new.a, new.b);
--
-- Name: rtest_nothn2 rtest_nothn_r4; Type: RULE; Schema: public; Owner: buildfarm
--
-CREATE RULE rtest_nothn_r4 AS ON INSERT TO public.rtest_nothn2 DO INSTEAD NOTHING;
+CREATE RULE rtest_nothn_r4 AS
+ ON INSERT TO public.rtest_nothn2 DO INSTEAD NOTHING;
--
-- Name: rtest_order1 rtest_order_r1; Type: RULE; Schema: public; Owner: buildfarm
--
-CREATE RULE rtest_order_r1 AS ON INSERT TO public.rtest_order1 DO INSTEAD INSERT INTO public.rtest_order2 (a, b, c) VALUES (new.a, nextval('public.rtest_seq'::regclass), 'rule 1 - this should run 1st'::text);
+CREATE RULE rtest_order_r1 AS
+ ON INSERT TO public.rtest_order1 DO INSTEAD INSERT INTO public.rtest_order2 (a, b, c)
+ VALUES (new.a, nextval('public.rtest_seq'::regclass), 'rule 1 - this should run 1st'::text);
--
-- Name: rtest_order1 rtest_order_r2; Type: RULE; Schema: public; Owner: buildfarm
--
-CREATE RULE rtest_order_r2 AS ON INSERT TO public.rtest_order1 DO INSERT INTO public.rtest_order2 (a, b, c) VALUES (new.a, nextval('public.rtest_seq'::regclass), 'rule 2 - this should run 2nd'::text);
+CREATE RULE rtest_order_r2 AS
+ ON INSERT TO public.rtest_order1 DO INSERT INTO public.rtest_order2 (a, b, c)
+ VALUES (new.a, nextval('public.rtest_seq'::regclass), 'rule 2 - this should run 2nd'::text);
--
-- Name: rtest_order1 rtest_order_r3; Type: RULE; Schema: public; Owner: buildfarm
--
-CREATE RULE rtest_order_r3 AS ON INSERT TO public.rtest_order1 DO INSTEAD INSERT INTO public.rtest_order2 (a, b, c) VALUES (new.a, nextval('public.rtest_seq'::regclass), 'rule 3 - this should run 3rd'::text);
+CREATE RULE rtest_order_r3 AS
+ ON INSERT TO public.rtest_order1 DO INSTEAD INSERT INTO public.rtest_order2 (a, b, c)
+ VALUES (new.a, nextval('public.rtest_seq'::regclass), 'rule 3 - this should run 3rd'::text);
--
-- Name: rtest_order1 rtest_order_r4; Type: RULE; Schema: public; Owner: buildfarm
--
-CREATE RULE rtest_order_r4 AS ON INSERT TO public.rtest_order1 WHERE (new.a < 100) DO INSTEAD INSERT INTO public.rtest_order2 (a, b, c) VALUES (new.a, nextval('public.rtest_seq'::regclass), 'rule 4 - this should run 4th'::text);
+CREATE RULE rtest_order_r4 AS
+ ON INSERT TO public.rtest_order1
+ WHERE (new.a < 100) DO INSTEAD INSERT INTO public.rtest_order2 (a, b, c)
+ VALUES (new.a, nextval('public.rtest_seq'::regclass), 'rule 4 - this should run 4th'::text);
--
-- Name: rtest_person rtest_pers_del; Type: RULE; Schema: public; Owner: buildfarm
--
-CREATE RULE rtest_pers_del AS ON DELETE TO public.rtest_person DO DELETE FROM public.rtest_admin WHERE (rtest_admin.pname = old.pname);
+CREATE RULE rtest_pers_del AS
+ ON DELETE TO public.rtest_person DO DELETE FROM public.rtest_admin
+ WHERE (rtest_admin.pname = old.pname);
--
-- Name: rtest_person rtest_pers_upd; Type: RULE; Schema: public; Owner: buildfarm
--
-CREATE RULE rtest_pers_upd AS ON UPDATE TO public.rtest_person DO UPDATE public.rtest_admin SET pname = new.pname WHERE (rtest_admin.pname = old.pname);
+CREATE RULE rtest_pers_upd AS
+ ON UPDATE TO public.rtest_person DO UPDATE public.rtest_admin SET pname = new.pname
+ WHERE (rtest_admin.pname = old.pname);
--
-- Name: rtest_system rtest_sys_del; Type: RULE; Schema: public; Owner: buildfarm
--
-CREATE RULE rtest_sys_del AS ON DELETE TO public.rtest_system DO (DELETE FROM public.rtest_interface WHERE (rtest_interface.sysname = old.sysname); DELETE FROM public.rtest_admin WHERE (rtest_admin.sysname = old.sysname); );
+CREATE RULE rtest_sys_del AS
+ ON DELETE TO public.rtest_system DO ( DELETE FROM public.rtest_interface
+ WHERE (rtest_interface.sysname = old.sysname);
+ DELETE FROM public.rtest_admin
+ WHERE (rtest_admin.sysname = old.sysname);
+);
--
-- Name: rtest_system rtest_sys_upd; Type: RULE; Schema: public; Owner: buildfarm
--
-CREATE RULE rtest_sys_upd AS ON UPDATE TO public.rtest_system DO (UPDATE public.rtest_interface SET sysname = new.sysname WHERE (rtest_interface.sysname = old.sysname); UPDATE public.rtest_admin SET sysname = new.sysname WHERE (rtest_admin.sysname = old.sysname); );
+CREATE RULE rtest_sys_upd AS
+ ON UPDATE TO public.rtest_system DO ( UPDATE public.rtest_interface SET sysname = new.sysname
+ WHERE (rtest_interface.sysname = old.sysname);
+ UPDATE public.rtest_admin SET sysname = new.sysname
+ WHERE (rtest_admin.sysname = old.sysname);
+);
--
-- Name: rtest_t4 rtest_t4_ins1; Type: RULE; Schema: public; Owner: buildfarm
--
-CREATE RULE rtest_t4_ins1 AS ON INSERT TO public.rtest_t4 WHERE ((new.a >= 10) AND (new.a < 20)) DO INSTEAD INSERT INTO public.rtest_t5 (a, b) VALUES (new.a, new.b);
+CREATE RULE rtest_t4_ins1 AS
+ ON INSERT TO public.rtest_t4
+ WHERE ((new.a >= 10) AND (new.a < 20)) DO INSTEAD INSERT INTO public.rtest_t5 (a, b)
+ VALUES (new.a, new.b);
--
-- Name: rtest_t4 rtest_t4_ins2; Type: RULE; Schema: public; Owner: buildfarm
--
-CREATE RULE rtest_t4_ins2 AS ON INSERT TO public.rtest_t4 WHERE ((new.a >= 20) AND (new.a < 30)) DO INSERT INTO public.rtest_t6 (a, b) VALUES (new.a, new.b);
+CREATE RULE rtest_t4_ins2 AS
+ ON INSERT TO public.rtest_t4
+ WHERE ((new.a >= 20) AND (new.a < 30)) DO INSERT INTO public.rtest_t6 (a, b)
+ VALUES (new.a, new.b);
--
-- Name: rtest_t5 rtest_t5_ins; Type: RULE; Schema: public; Owner: buildfarm
--
-CREATE RULE rtest_t5_ins AS ON INSERT TO public.rtest_t5 WHERE (new.a > 15) DO INSERT INTO public.rtest_t7 (a, b) VALUES (new.a, new.b);
+CREATE RULE rtest_t5_ins AS
+ ON INSERT TO public.rtest_t5
+ WHERE (new.a > 15) DO INSERT INTO public.rtest_t7 (a, b)
+ VALUES (new.a, new.b);
--
-- Name: rtest_t6 rtest_t6_ins; Type: RULE; Schema: public; Owner: buildfarm
--
-CREATE RULE rtest_t6_ins AS ON INSERT TO public.rtest_t6 WHERE (new.a > 25) DO INSTEAD INSERT INTO public.rtest_t8 (a, b) VALUES (new.a, new.b);
+CREATE RULE rtest_t6_ins AS
+ ON INSERT TO public.rtest_t6
+ WHERE (new.a > 25) DO INSTEAD INSERT INTO public.rtest_t8 (a, b)
+ VALUES (new.a, new.b);
--
-- Name: rtest_v1 rtest_v1_del; Type: RULE; Schema: public; Owner: buildfarm
--
-CREATE RULE rtest_v1_del AS ON DELETE TO public.rtest_v1 DO INSTEAD DELETE FROM public.rtest_t1 WHERE (rtest_t1.a = old.a);
+CREATE RULE rtest_v1_del AS
+ ON DELETE TO public.rtest_v1 DO INSTEAD DELETE FROM public.rtest_t1
+ WHERE (rtest_t1.a = old.a);
--
-- Name: rtest_v1 rtest_v1_ins; Type: RULE; Schema: public; Owner: buildfarm
--
-CREATE RULE rtest_v1_ins AS ON INSERT TO public.rtest_v1 DO INSTEAD INSERT INTO public.rtest_t1 (a, b) VALUES (new.a, new.b);
+CREATE RULE rtest_v1_ins AS
+ ON INSERT TO public.rtest_v1 DO INSTEAD INSERT INTO public.rtest_t1 (a, b)
+ VALUES (new.a, new.b);
--
-- Name: rtest_v1 rtest_v1_upd; Type: RULE; Schema: public; Owner: buildfarm
--
-CREATE RULE rtest_v1_upd AS ON UPDATE TO public.rtest_v1 DO INSTEAD UPDATE public.rtest_t1 SET a = new.a, b = new.b WHERE (rtest_t1.a = old.a);
+CREATE RULE rtest_v1_upd AS
+ ON UPDATE TO public.rtest_v1 DO INSTEAD UPDATE public.rtest_t1 SET a = new.a, b = new.b
+ WHERE (rtest_t1.a = old.a);
--
-- Name: rule_and_refint_t3 rule_and_refint_t3_ins; Type: RULE; Schema: public; Owner: buildfarm
--
-CREATE RULE rule_and_refint_t3_ins AS ON INSERT TO public.rule_and_refint_t3 WHERE (EXISTS (SELECT 1 FROM public.rule_and_refint_t3 WHERE (((rule_and_refint_t3.id3a = new.id3a) AND (rule_and_refint_t3.id3b = new.id3b)) AND (rule_and_refint_t3.id3c = new.id3c)))) DO INSTEAD UPDATE public.rule_and_refint_t3 SET data = new.data WHERE (((rule_and_refint_t3.id3a = new.id3a) AND (rule_and_refint_t3.id3b = new.id3b)) AND (rule_and_refint_t3.id3c = new.id3c));
+CREATE RULE rule_and_refint_t3_ins AS
+ ON INSERT TO public.rule_and_refint_t3
+ WHERE (EXISTS ( SELECT 1
+ FROM public.rule_and_refint_t3 rule_and_refint_t3_1
+ WHERE ((rule_and_refint_t3_1.id3a = new.id3a) AND (rule_and_refint_t3_1.id3b = new.id3b) AND (rule_and_refint_t3_1.id3c = new.id3c)))) DO INSTEAD UPDATE public.rule_and_refint_t3 SET data = new.data
+ WHERE ((rule_and_refint_t3.id3a = new.id3a) AND (rule_and_refint_t3.id3b = new.id3b) AND (rule_and_refint_t3.id3c = new.id3c));
--
-- Name: shoelace shoelace_del; Type: RULE; Schema: public; Owner: buildfarm
--
-CREATE RULE shoelace_del AS ON DELETE TO public.shoelace DO INSTEAD DELETE FROM public.shoelace_data WHERE (shoelace_data.sl_name = old.sl_name);
+CREATE RULE shoelace_del AS
+ ON DELETE TO public.shoelace DO INSTEAD DELETE FROM public.shoelace_data
+ WHERE (shoelace_data.sl_name = old.sl_name);
--
-- Name: shoelace shoelace_ins; Type: RULE; Schema: public; Owner: buildfarm
--
-CREATE RULE shoelace_ins AS ON INSERT TO public.shoelace DO INSTEAD INSERT INTO public.shoelace_data (sl_name, sl_avail, sl_color, sl_len, sl_unit) VALUES (new.sl_name, new.sl_avail, new.sl_color, new.sl_len, new.sl_unit);
+CREATE RULE shoelace_ins AS
+ ON INSERT TO public.shoelace DO INSTEAD INSERT INTO public.shoelace_data (sl_name, sl_avail, sl_color, sl_len, sl_unit)
+ VALUES (new.sl_name, new.sl_avail, new.sl_color, new.sl_len, new.sl_unit);
--
-- Name: shoelace_ok shoelace_ok_ins; Type: RULE; Schema: public; Owner: buildfarm
--
-CREATE RULE shoelace_ok_ins AS ON INSERT TO public.shoelace_ok DO INSTEAD UPDATE public.shoelace SET sl_avail = (shoelace.sl_avail + new.ok_quant) WHERE (shoelace.sl_name = new.ok_name);
+CREATE RULE shoelace_ok_ins AS
+ ON INSERT TO public.shoelace_ok DO INSTEAD UPDATE public.shoelace SET sl_avail = (shoelace.sl_avail + new.ok_quant)
+ WHERE (shoelace.sl_name = new.ok_name);
--
-- Name: shoelace shoelace_upd; Type: RULE; Schema: public; Owner: buildfarm
--
-CREATE RULE shoelace_upd AS ON UPDATE TO public.shoelace DO INSTEAD UPDATE public.shoelace_data SET sl_name = new.sl_name, sl_avail = new.sl_avail, sl_color = new.sl_color, sl_len = new.sl_len, sl_unit = new.sl_unit WHERE (shoelace_data.sl_name = old.sl_name);
+CREATE RULE shoelace_upd AS
+ ON UPDATE TO public.shoelace DO INSTEAD UPDATE public.shoelace_data SET sl_name = new.sl_name, sl_avail = new.sl_avail, sl_color = new.sl_color, sl_len = new.sl_len, sl_unit = new.sl_unit
+ WHERE (shoelace_data.sl_name = old.sl_name);
--
@@ -213516,22 +213720,10 @@
--
--- Name: DATABASE regression; Type: ACL; Schema: -; Owner: buildfarm
---
-
-REVOKE ALL ON DATABASE regression FROM PUBLIC;
-REVOKE ALL ON DATABASE regression FROM buildfarm;
-GRANT ALL ON DATABASE regression TO buildfarm;
-GRANT CONNECT,TEMPORARY ON DATABASE regression TO PUBLIC;
-
-
---
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -213539,9 +213731,6 @@
-- Name: TABLE my_credit_card_normal; Type: ACL; Schema: public; Owner: buildfarm
--
-REVOKE ALL ON TABLE public.my_credit_card_normal FROM PUBLIC;
-REVOKE ALL ON TABLE public.my_credit_card_normal FROM buildfarm;
-GRANT ALL ON TABLE public.my_credit_card_normal TO buildfarm;
GRANT SELECT ON TABLE public.my_credit_card_normal TO PUBLIC;
@@ -213549,9 +213738,6 @@
-- Name: TABLE my_credit_card_secure; Type: ACL; Schema: public; Owner: buildfarm
--
-REVOKE ALL ON TABLE public.my_credit_card_secure FROM PUBLIC;
-REVOKE ALL ON TABLE public.my_credit_card_secure FROM buildfarm;
-GRANT ALL ON TABLE public.my_credit_card_secure TO buildfarm;
GRANT SELECT ON TABLE public.my_credit_card_secure TO PUBLIC;
@@ -213559,9 +213745,6 @@
-- Name: TABLE my_credit_card_usage_normal; Type: ACL; Schema: public; Owner: buildfarm
--
-REVOKE ALL ON TABLE public.my_credit_card_usage_normal FROM PUBLIC;
-REVOKE ALL ON TABLE public.my_credit_card_usage_normal FROM buildfarm;
-GRANT ALL ON TABLE public.my_credit_card_usage_normal TO buildfarm;
GRANT SELECT ON TABLE public.my_credit_card_usage_normal TO PUBLIC;
@@ -213569,9 +213752,6 @@
-- Name: TABLE my_credit_card_usage_secure; Type: ACL; Schema: public; Owner: buildfarm
--
-REVOKE ALL ON TABLE public.my_credit_card_usage_secure FROM PUBLIC;
-REVOKE ALL ON TABLE public.my_credit_card_usage_secure FROM buildfarm;
-GRANT ALL ON TABLE public.my_credit_card_usage_secure TO buildfarm;
GRANT SELECT ON TABLE public.my_credit_card_usage_secure TO PUBLIC;
@@ -213579,9 +213759,6 @@
-- Name: TABLE my_property_normal; Type: ACL; Schema: public; Owner: buildfarm
--
-REVOKE ALL ON TABLE public.my_property_normal FROM PUBLIC;
-REVOKE ALL ON TABLE public.my_property_normal FROM buildfarm;
-GRANT ALL ON TABLE public.my_property_normal TO buildfarm;
GRANT SELECT ON TABLE public.my_property_normal TO PUBLIC;
@@ -213589,9 +213766,6 @@
-- Name: TABLE my_property_secure; Type: ACL; Schema: public; Owner: buildfarm
--
-REVOKE ALL ON TABLE public.my_property_secure FROM PUBLIC;
-REVOKE ALL ON TABLE public.my_property_secure FROM buildfarm;
-GRANT ALL ON TABLE public.my_property_secure TO buildfarm;
GRANT SELECT ON TABLE public.my_property_secure TO PUBLIC;
Attachment: dumpdiff-REL9_3_STABLE (text/plain; charset=UTF-8)
--- /home/andrew/bf/root/upgrade.crake/HEAD/origin-REL9_3_STABLE.sql.fixed 2021-09-12 16:02:14.283096016 -0400
+++ /home/andrew/bf/root/upgrade.crake/HEAD/converted-REL9_3_STABLE-to-HEAD.sql.fixed 2021-09-12 16:02:14.302096017 -0400
@@ -62,9 +62,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -1501,9 +1501,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -8634,9 +8634,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -9477,9 +9477,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -27944,9 +27944,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -28198,9 +28198,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -31431,9 +31431,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -31536,9 +31534,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -31641,9 +31637,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -31753,9 +31749,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -31844,9 +31838,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -31935,9 +31927,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -33147,9 +33139,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -33238,9 +33228,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -35403,9 +35393,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -36566,9 +36556,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -36727,9 +36717,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -36869,9 +36859,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -37047,9 +37037,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -37961,9 +37949,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -40672,9 +40660,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -40992,9 +40980,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -41110,9 +41096,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -41215,9 +41199,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -41306,9 +41288,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -42250,9 +42232,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -42297,9 +42277,7 @@
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -42420,7 +42398,8 @@
--
CREATE TYPE public.arrayrange AS RANGE (
- subtype = integer[]
+ subtype = integer[],
+ multirange_type_name = public.arraymultirange
);
@@ -42443,7 +42422,8 @@
--
CREATE TYPE public.cashrange AS RANGE (
- subtype = money
+ subtype = money,
+ multirange_type_name = public.cashmultirange
);
@@ -42530,6 +42510,7 @@
INTERNALLENGTH = 16,
INPUT = public.int44in,
OUTPUT = public.int44out,
+ SUBSCRIPT = raw_array_subscript_handler,
ELEMENT = integer,
CATEGORY = 'x',
PREFERRED = true,
@@ -42577,6 +42558,7 @@
CREATE TYPE public.float8range AS RANGE (
subtype = double precision,
+ multirange_type_name = public.float8multirange,
subtype_diff = float8mi
);
@@ -43021,6 +43003,7 @@
CREATE TYPE public.textrange AS RANGE (
subtype = text,
+ multirange_type_name = public.textmultirange,
collation = pg_catalog."C"
);
@@ -47481,7 +47466,7 @@
x integer,
y text,
z integer,
- CONSTRAINT sequence_con CHECK ((((x > 3) AND (y <> 'check failed'::text)) AND (z < 8)))
+ CONSTRAINT sequence_con CHECK (((x > 3) AND (y <> 'check failed'::text) AND (z < 8)))
);
@@ -47624,7 +47609,7 @@
x integer,
y text,
z integer,
- CONSTRAINT copy_con CHECK ((((x > 3) AND (y <> 'check failed'::text)) AND (x < 7)))
+ CONSTRAINT copy_con CHECK (((x > 3) AND (y <> 'check failed'::text) AND (x < 7)))
);
@@ -48220,8 +48205,8 @@
CREATE TABLE public.insert_tbl (
x integer DEFAULT nextval('public.insert_seq'::regclass),
y text DEFAULT '-NULL-'::text,
- z integer DEFAULT ((-1) * currval('public.insert_seq'::regclass)),
- CONSTRAINT insert_con CHECK ((((x >= 3) AND (y <> 'check failed'::text)) AND (x < 8))),
+ z integer DEFAULT ('-1'::integer * currval('public.insert_seq'::regclass)),
+ CONSTRAINT insert_con CHECK (((x >= 3) AND (y <> 'check failed'::text) AND (x < 8))),
CONSTRAINT insert_tbl_check CHECK (((x + z) = 0))
);
@@ -49598,7 +49583,7 @@
int4smaller(rsh.sh_avail, rsl.sl_avail) AS total_avail
FROM public.shoe rsh,
public.shoelace rsl
- WHERE (((rsl.sl_color = rsh.slcolor) AND (rsl.sl_len_cm >= rsh.slminlen_cm)) AND (rsl.sl_len_cm <= rsh.slmaxlen_cm));
+ WHERE ((rsl.sl_color = rsh.slcolor) AND (rsl.sl_len_cm >= rsh.slminlen_cm) AND (rsl.sl_len_cm <= rsh.slmaxlen_cm));
ALTER TABLE public.shoe_ready OWNER TO buildfarm;
@@ -219801,9 +219786,9 @@
--
COPY public.test_tsquery (txtkeyword, txtsample, keyword, sample) FROM stdin;
-'New York' new & york | big & apple | nyc 'new' & 'york' ( 'new' & 'york' | 'big' & 'appl' ) | 'nyc'
+'New York' new & york | big & apple | nyc 'new' & 'york' 'new' & 'york' | 'big' & 'appl' | 'nyc'
Moscow moskva | moscow 'moscow' 'moskva' | 'moscow'
-'Sanct Peter' Peterburg | peter | 'Sanct Peterburg' 'sanct' & 'peter' ( 'peterburg' | 'peter' ) | 'sanct' & 'peterburg'
+'Sanct Peter' Peterburg | peter | 'Sanct Peterburg' 'sanct' & 'peter' 'peterburg' | 'peter' | 'sanct' & 'peterburg'
'foo bar qq' foo & (bar | qq) & city 'foo' & 'bar' & 'qq' 'foo' & ( 'bar' | 'qq' ) & 'citi'
\.
@@ -221688,8 +221673,8 @@
ON INSERT TO public.rule_and_refint_t3
WHERE (EXISTS ( SELECT 1
FROM public.rule_and_refint_t3 rule_and_refint_t3_1
- WHERE (((rule_and_refint_t3_1.id3a = new.id3a) AND (rule_and_refint_t3_1.id3b = new.id3b)) AND (rule_and_refint_t3_1.id3c = new.id3c)))) DO INSTEAD UPDATE public.rule_and_refint_t3 SET data = new.data
- WHERE (((rule_and_refint_t3.id3a = new.id3a) AND (rule_and_refint_t3.id3b = new.id3b)) AND (rule_and_refint_t3.id3c = new.id3c));
+ WHERE ((rule_and_refint_t3_1.id3a = new.id3a) AND (rule_and_refint_t3_1.id3b = new.id3b) AND (rule_and_refint_t3_1.id3c = new.id3c)))) DO INSTEAD UPDATE public.rule_and_refint_t3 SET data = new.data
+ WHERE ((rule_and_refint_t3.id3a = new.id3a) AND (rule_and_refint_t3.id3b = new.id3b) AND (rule_and_refint_t3.id3c = new.id3c));
--
@@ -222091,22 +222076,10 @@
--
--- Name: DATABASE regression; Type: ACL; Schema: -; Owner: buildfarm
---
-
-REVOKE ALL ON DATABASE regression FROM PUBLIC;
-REVOKE ALL ON DATABASE regression FROM buildfarm;
-GRANT ALL ON DATABASE regression TO buildfarm;
-GRANT CONNECT,TEMPORARY ON DATABASE regression TO PUBLIC;
-
-
---
-- Name: SCHEMA public; Type: ACL; Schema: -; Owner: buildfarm
--
-REVOKE ALL ON SCHEMA public FROM PUBLIC;
-REVOKE ALL ON SCHEMA public FROM buildfarm;
-GRANT ALL ON SCHEMA public TO buildfarm;
+REVOKE USAGE ON SCHEMA public FROM PUBLIC;
GRANT ALL ON SCHEMA public TO PUBLIC;
@@ -222114,9 +222087,6 @@
-- Name: TABLE my_credit_card_normal; Type: ACL; Schema: public; Owner: buildfarm
--
-REVOKE ALL ON TABLE public.my_credit_card_normal FROM PUBLIC;
-REVOKE ALL ON TABLE public.my_credit_card_normal FROM buildfarm;
-GRANT ALL ON TABLE public.my_credit_card_normal TO buildfarm;
GRANT SELECT ON TABLE public.my_credit_card_normal TO PUBLIC;
@@ -222124,9 +222094,6 @@
-- Name: TABLE my_credit_card_secure; Type: ACL; Schema: public; Owner: buildfarm
--
-REVOKE ALL ON TABLE public.my_credit_card_secure FROM PUBLIC;
-REVOKE ALL ON TABLE public.my_credit_card_secure FROM buildfarm;
-GRANT ALL ON TABLE public.my_credit_card_secure TO buildfarm;
GRANT SELECT ON TABLE public.my_credit_card_secure TO PUBLIC;
@@ -222134,9 +222101,6 @@
-- Name: TABLE my_credit_card_usage_normal; Type: ACL; Schema: public; Owner: buildfarm
--
-REVOKE ALL ON TABLE public.my_credit_card_usage_normal FROM PUBLIC;
-REVOKE ALL ON TABLE public.my_credit_card_usage_normal FROM buildfarm;
-GRANT ALL ON TABLE public.my_credit_card_usage_normal TO buildfarm;
GRANT SELECT ON TABLE public.my_credit_card_usage_normal TO PUBLIC;
@@ -222144,9 +222108,6 @@
-- Name: TABLE my_credit_card_usage_secure; Type: ACL; Schema: public; Owner: buildfarm
--
-REVOKE ALL ON TABLE public.my_credit_card_usage_secure FROM PUBLIC;
-REVOKE ALL ON TABLE public.my_credit_card_usage_secure FROM buildfarm;
-GRANT ALL ON TABLE public.my_credit_card_usage_secure TO buildfarm;
GRANT SELECT ON TABLE public.my_credit_card_usage_secure TO PUBLIC;
@@ -222154,9 +222115,6 @@
-- Name: TABLE my_property_normal; Type: ACL; Schema: public; Owner: buildfarm
--
-REVOKE ALL ON TABLE public.my_property_normal FROM PUBLIC;
-REVOKE ALL ON TABLE public.my_property_normal FROM buildfarm;
-GRANT ALL ON TABLE public.my_property_normal TO buildfarm;
GRANT SELECT ON TABLE public.my_property_normal TO PUBLIC;
@@ -222164,9 +222122,6 @@
-- Name: TABLE my_property_secure; Type: ACL; Schema: public; Owner: buildfarm
--
-REVOKE ALL ON TABLE public.my_property_secure FROM PUBLIC;
-REVOKE ALL ON TABLE public.my_property_secure FROM buildfarm;
-GRANT ALL ON TABLE public.my_property_secure TO buildfarm;
GRANT SELECT ON TABLE public.my_property_secure TO PUBLIC;
Attachment: dumpdiff-REL9_6_STABLE (text/plain; charset=UTF-8)
--- /home/andrew/bf/root/upgrade.crake/HEAD/origin-REL9_6_STABLE.sql.fixed 2021-09-12 16:06:12.532099754 -0400
+++ /home/andrew/bf/root/upgrade.crake/HEAD/converted-REL9_6_STABLE-to-HEAD.sql.fixed 2021-09-12 16:06:12.559099754 -0400
@@ -53507,7 +53557,8 @@
--
CREATE TYPE public.arrayrange AS RANGE (
- subtype = integer[]
+ subtype = integer[],
+ multirange_type_name = public.arraymultirange
);
@@ -53530,7 +53581,8 @@
--
CREATE TYPE public.cashrange AS RANGE (
- subtype = money
+ subtype = money,
+ multirange_type_name = public.cashmultirange
);
@@ -53617,6 +53669,7 @@
INTERNALLENGTH = 16,
INPUT = public.int44in,
OUTPUT = public.int44out,
+ SUBSCRIPT = raw_array_subscript_handler,
ELEMENT = integer,
CATEGORY = 'x',
PREFERRED = true,
@@ -53664,6 +53717,7 @@
CREATE TYPE public.float8range AS RANGE (
subtype = double precision,
+ multirange_type_name = public.float8multirange,
subtype_diff = float8mi
);
@@ -54190,6 +54244,7 @@
CREATE TYPE public.textrange AS RANGE (
subtype = text,
+ multirange_type_name = public.textmultirange,
collation = pg_catalog."C"
);
Attachment: dumpdiff-REL_10_STABLE (text/plain; charset=UTF-8)
--- /home/andrew/bf/root/upgrade.crake/HEAD/origin-REL_10_STABLE.sql.fixed 2021-09-12 16:07:51.164101299 -0400
+++ /home/andrew/bf/root/upgrade.crake/HEAD/converted-REL_10_STABLE-to-HEAD.sql.fixed 2021-09-12 16:07:51.194101299 -0400
@@ -166523,7 +166575,8 @@
--
CREATE TYPE public.arrayrange AS RANGE (
- subtype = integer[]
+ subtype = integer[],
+ multirange_type_name = public.arraymultirange
);
@@ -166546,7 +166599,8 @@
--
CREATE TYPE public.cashrange AS RANGE (
- subtype = money
+ subtype = money,
+ multirange_type_name = public.cashmultirange
);
@@ -166633,6 +166687,7 @@
INTERNALLENGTH = 16,
INPUT = public.int44in,
OUTPUT = public.int44out,
+ SUBSCRIPT = raw_array_subscript_handler,
ELEMENT = integer,
CATEGORY = 'x',
PREFERRED = true,
@@ -166680,6 +166735,7 @@
CREATE TYPE public.float8range AS RANGE (
subtype = double precision,
+ multirange_type_name = public.float8multirange,
subtype_diff = float8mi
);
@@ -167218,6 +167274,7 @@
CREATE TYPE public.textrange AS RANGE (
subtype = text,
+ multirange_type_name = public.textmultirange,
collation = pg_catalog."C"
);
Attachment: dumpdiff-REL_11_STABLE (text/plain; charset=UTF-8)
--- /home/andrew/bf/root/upgrade.crake/HEAD/origin-REL_11_STABLE.sql.fixed 2021-09-12 16:09:57.235103275 -0400
+++ /home/andrew/bf/root/upgrade.crake/HEAD/converted-REL_11_STABLE-to-HEAD.sql.fixed 2021-09-12 16:09:57.285103276 -0400
@@ -169304,7 +169352,7 @@
-- Name: test_proc6(integer, integer, integer); Type: PROCEDURE; Schema: public; Owner: buildfarm
--
-CREATE PROCEDURE public.test_proc6(a integer, INOUT b integer, INOUT c integer)
+CREATE PROCEDURE public.test_proc6(IN a integer, INOUT b integer, INOUT c integer)
LANGUAGE plperl
AS $_$
my ($a, $b, $c) = @_;
@@ -169312,7 +169360,7 @@
$_$;
-ALTER PROCEDURE public.test_proc6(a integer, INOUT b integer, INOUT c integer) OWNER TO buildfarm;
+ALTER PROCEDURE public.test_proc6(IN a integer, INOUT b integer, INOUT c integer) OWNER TO buildfarm;
--
-- Name: text_arrayref(); Type: FUNCTION; Schema: public; Owner: buildfarm
@@ -170252,7 +170304,7 @@
-- Name: p1(integer, text); Type: PROCEDURE; Schema: public; Owner: buildfarm
--
-CREATE PROCEDURE public.p1(v_cnt integer, INOUT v_text text DEFAULT NULL::text)
+CREATE PROCEDURE public.p1(IN v_cnt integer, INOUT v_text text DEFAULT NULL::text)
LANGUAGE plpgsql
AS $$
BEGIN
@@ -170261,7 +170313,7 @@
$$;
-ALTER PROCEDURE public.p1(v_cnt integer, INOUT v_text text) OWNER TO buildfarm;
+ALTER PROCEDURE public.p1(IN v_cnt integer, INOUT v_text text) OWNER TO buildfarm;
--
-- Name: read_ordered_int8s(public.ordered_int8s); Type: FUNCTION; Schema: public; Owner: buildfarm
@@ -170708,7 +170760,7 @@
-- Name: test_proc6(integer, integer, integer); Type: PROCEDURE; Schema: public; Owner: buildfarm
--
-CREATE PROCEDURE public.test_proc6(a integer, INOUT b integer, INOUT c integer)
+CREATE PROCEDURE public.test_proc6(IN a integer, INOUT b integer, INOUT c integer)
LANGUAGE plpgsql
AS $$
BEGIN
@@ -170718,13 +170770,13 @@
$$;
-ALTER PROCEDURE public.test_proc6(a integer, INOUT b integer, INOUT c integer) OWNER TO buildfarm;
+ALTER PROCEDURE public.test_proc6(IN a integer, INOUT b integer, INOUT c integer) OWNER TO buildfarm;
--
-- Name: test_proc7(integer, integer, numeric); Type: PROCEDURE; Schema: public; Owner: buildfarm
--
-CREATE PROCEDURE public.test_proc7(x integer, INOUT a integer, INOUT b numeric)
+CREATE PROCEDURE public.test_proc7(IN x integer, INOUT a integer, INOUT b numeric)
LANGUAGE plpgsql
AS $$
BEGIN
@@ -170737,13 +170789,13 @@
$$;
-ALTER PROCEDURE public.test_proc7(x integer, INOUT a integer, INOUT b numeric) OWNER TO buildfarm;
+ALTER PROCEDURE public.test_proc7(IN x integer, INOUT a integer, INOUT b numeric) OWNER TO buildfarm;
--
-- Name: test_proc7c(integer, integer, numeric); Type: PROCEDURE; Schema: public; Owner: buildfarm
--
-CREATE PROCEDURE public.test_proc7c(x integer, INOUT a integer, INOUT b numeric)
+CREATE PROCEDURE public.test_proc7c(IN x integer, INOUT a integer, INOUT b numeric)
LANGUAGE plpgsql
AS $$
BEGIN
@@ -170754,13 +170806,13 @@
$$;
-ALTER PROCEDURE public.test_proc7c(x integer, INOUT a integer, INOUT b numeric) OWNER TO buildfarm;
+ALTER PROCEDURE public.test_proc7c(IN x integer, INOUT a integer, INOUT b numeric) OWNER TO buildfarm;
--
-- Name: test_proc7cc(integer); Type: PROCEDURE; Schema: public; Owner: buildfarm
--
-CREATE PROCEDURE public.test_proc7cc(_x integer)
+CREATE PROCEDURE public.test_proc7cc(IN _x integer)
LANGUAGE plpgsql
AS $$
DECLARE _a int; _b numeric;
@@ -170771,7 +170823,7 @@
$$;
-ALTER PROCEDURE public.test_proc7cc(_x integer) OWNER TO buildfarm;
+ALTER PROCEDURE public.test_proc7cc(IN _x integer) OWNER TO buildfarm;
--
-- Name: test_proc8a(integer, integer); Type: PROCEDURE; Schema: public; Owner: buildfarm
@@ -170880,7 +170932,7 @@
-- Name: transaction_test1(integer, text); Type: PROCEDURE; Schema: public; Owner: buildfarm
--
-CREATE PROCEDURE public.transaction_test1(x integer, y text)
+CREATE PROCEDURE public.transaction_test1(IN x integer, IN y text)
LANGUAGE plpgsql
AS $$
BEGIN
@@ -170896,7 +170948,7 @@
$$;
-ALTER PROCEDURE public.transaction_test1(x integer, y text) OWNER TO buildfarm;
+ALTER PROCEDURE public.transaction_test1(IN x integer, IN y text) OWNER TO buildfarm;
--
-- Name: transaction_test10a(integer); Type: PROCEDURE; Schema: public; Owner: buildfarm
@@ -171056,7 +171108,7 @@
-- Name: transaction_test6(text); Type: PROCEDURE; Schema: public; Owner: buildfarm
--
-CREATE PROCEDURE public.transaction_test6(c text)
+CREATE PROCEDURE public.transaction_test6(IN c text)
LANGUAGE plpgsql
AS $$
BEGIN
@@ -171065,7 +171117,7 @@
$$;
-ALTER PROCEDURE public.transaction_test6(c text) OWNER TO buildfarm;
+ALTER PROCEDURE public.transaction_test6(IN c text) OWNER TO buildfarm;
--
-- Name: transaction_test7(); Type: PROCEDURE; Schema: public; Owner: buildfarm
@@ -173142,7 +173198,7 @@
-- Name: test_proc6(integer, integer, integer); Type: PROCEDURE; Schema: public; Owner: buildfarm
--
-CREATE PROCEDURE public.test_proc6(a integer, INOUT b integer, INOUT c integer)
+CREATE PROCEDURE public.test_proc6(IN a integer, INOUT b integer, INOUT c integer)
LANGUAGE pltcl
AS $_$
set bb [expr $2 * $1]
@@ -173151,7 +173207,7 @@
$_$;
-ALTER PROCEDURE public.test_proc6(a integer, INOUT b integer, INOUT c integer) OWNER TO buildfarm;
+ALTER PROCEDURE public.test_proc6(IN a integer, INOUT b integer, INOUT c integer) OWNER TO buildfarm;
--
-- Name: transaction_test1(); Type: PROCEDURE; Schema: public; Owner: buildfarm
@@ -174092,7 +174148,8 @@
--
CREATE TYPE public.arrayrange AS RANGE (
- subtype = integer[]
+ subtype = integer[],
+ multirange_type_name = public.arraymultirange
);
@@ -174115,7 +174172,8 @@
--
CREATE TYPE public.cashrange AS RANGE (
- subtype = money
+ subtype = money,
+ multirange_type_name = public.cashmultirange
);
@@ -174202,6 +174260,7 @@
INTERNALLENGTH = 16,
INPUT = public.int44in,
OUTPUT = public.int44out,
+ SUBSCRIPT = raw_array_subscript_handler,
ELEMENT = integer,
CATEGORY = 'x',
PREFERRED = true,
@@ -174249,6 +174308,7 @@
CREATE TYPE public.float8range AS RANGE (
subtype = double precision,
+ multirange_type_name = public.float8multirange,
subtype_diff = float8mi
);
@@ -174787,6 +174847,7 @@
CREATE TYPE public.textrange AS RANGE (
subtype = text,
+ multirange_type_name = public.textmultirange,
collation = pg_catalog."C"
);
@@ -176580,7 +176643,7 @@
-- Name: ptest3(text); Type: PROCEDURE; Schema: public; Owner: buildfarm
--
-CREATE PROCEDURE public.ptest3(y text)
+CREATE PROCEDURE public.ptest3(IN y text)
LANGUAGE sql
AS $_$
CALL ptest1(y);
@@ -176588,13 +176651,13 @@
$_$;
-ALTER PROCEDURE public.ptest3(y text) OWNER TO buildfarm;
+ALTER PROCEDURE public.ptest3(IN y text) OWNER TO buildfarm;
--
-- Name: ptest5(integer, text, integer); Type: PROCEDURE; Schema: public; Owner: buildfarm
--
-CREATE PROCEDURE public.ptest5(a integer, b text, c integer DEFAULT 100)
+CREATE PROCEDURE public.ptest5(IN a integer, IN b text, IN c integer DEFAULT 100)
LANGUAGE sql
AS $$
INSERT INTO cp_test VALUES(a, b);
@@ -176602,33 +176665,33 @@
$$;
-ALTER PROCEDURE public.ptest5(a integer, b text, c integer) OWNER TO buildfarm;
+ALTER PROCEDURE public.ptest5(IN a integer, IN b text, IN c integer) OWNER TO buildfarm;
--
-- Name: ptest6(integer, anyelement); Type: PROCEDURE; Schema: public; Owner: buildfarm
--
-CREATE PROCEDURE public.ptest6(a integer, b anyelement)
+CREATE PROCEDURE public.ptest6(IN a integer, IN b anyelement)
LANGUAGE sql
AS $$
SELECT NULL::int;
$$;
-ALTER PROCEDURE public.ptest6(a integer, b anyelement) OWNER TO buildfarm;
+ALTER PROCEDURE public.ptest6(IN a integer, IN b anyelement) OWNER TO buildfarm;
--
-- Name: ptest7(text, text); Type: PROCEDURE; Schema: public; Owner: buildfarm
--
-CREATE PROCEDURE public.ptest7(a text, b text)
+CREATE PROCEDURE public.ptest7(IN a text, IN b text)
LANGUAGE sql
AS $$
SELECT a = b;
$$;
-ALTER PROCEDURE public.ptest7(a text, b text) OWNER TO buildfarm;
+ALTER PROCEDURE public.ptest7(IN a text, IN b text) OWNER TO buildfarm;
--
-- Name: raise_test3(integer); Type: FUNCTION; Schema: public; Owner: buildfarm
@@ -179728,6 +179791,8 @@
--
CREATE OPERATOR FAMILY public.part_test_int4_ops USING hash;
+ALTER OPERATOR FAMILY public.part_test_int4_ops USING hash ADD
+ FUNCTION 2 (integer, integer) public.part_hashint4_noop(integer,bigint);
ALTER OPERATOR FAMILY public.part_test_int4_ops USING hash OWNER TO buildfarm;
@@ -179738,8 +179803,7 @@
CREATE OPERATOR CLASS public.part_test_int4_ops
FOR TYPE integer USING hash FAMILY public.part_test_int4_ops AS
- OPERATOR 1 =(integer,integer) ,
- FUNCTION 2 (integer, integer) public.part_hashint4_noop(integer,bigint);
+ OPERATOR 1 =(integer,integer);
ALTER OPERATOR CLASS public.part_test_int4_ops USING hash OWNER TO buildfarm;
@@ -179758,6 +179822,8 @@
--
CREATE OPERATOR FAMILY public.part_test_text_ops USING hash;
+ALTER OPERATOR FAMILY public.part_test_text_ops USING hash ADD
+ FUNCTION 2 (text, text) public.part_hashtext_length(text,bigint);
ALTER OPERATOR FAMILY public.part_test_text_ops USING hash OWNER TO buildfarm;
@@ -179768,8 +179834,7 @@
CREATE OPERATOR CLASS public.part_test_text_ops
FOR TYPE text USING hash FAMILY public.part_test_text_ops AS
- OPERATOR 1 =(text,text) ,
- FUNCTION 2 (text, text) public.part_hashtext_length(text,bigint);
+ OPERATOR 1 =(text,text);
ALTER OPERATOR CLASS public.part_test_text_ops USING hash OWNER TO buildfarm;
Attachment: dumpdiff-REL_12_STABLE (text/plain; charset=UTF-8)
--- /home/andrew/bf/root/upgrade.crake/HEAD/origin-REL_12_STABLE.sql.fixed 2021-09-12 16:12:00.510107180 -0400
+++ /home/andrew/bf/root/upgrade.crake/HEAD/converted-REL_12_STABLE-to-HEAD.sql.fixed 2021-09-12 16:12:00.559107182 -0400
@@ -169778,7 +169778,7 @@
-- Name: test_proc6(integer, integer, integer); Type: PROCEDURE; Schema: public; Owner: buildfarm
--
-CREATE PROCEDURE public.test_proc6(a integer, INOUT b integer, INOUT c integer)
+CREATE PROCEDURE public.test_proc6(IN a integer, INOUT b integer, INOUT c integer)
LANGUAGE plperl
AS $_$
my ($a, $b, $c) = @_;
@@ -169786,7 +169786,7 @@
$_$;
-ALTER PROCEDURE public.test_proc6(a integer, INOUT b integer, INOUT c integer) OWNER TO buildfarm;
+ALTER PROCEDURE public.test_proc6(IN a integer, INOUT b integer, INOUT c integer) OWNER TO buildfarm;
--
-- Name: text_arrayref(); Type: FUNCTION; Schema: public; Owner: buildfarm
@@ -170757,7 +170757,7 @@
-- Name: p1(integer, text); Type: PROCEDURE; Schema: public; Owner: buildfarm
--
-CREATE PROCEDURE public.p1(v_cnt integer, INOUT v_text text DEFAULT NULL::text)
+CREATE PROCEDURE public.p1(IN v_cnt integer, INOUT v_text text DEFAULT NULL::text)
LANGUAGE plpgsql
AS $$
BEGIN
@@ -170766,7 +170766,7 @@
$$;
-ALTER PROCEDURE public.p1(v_cnt integer, INOUT v_text text) OWNER TO buildfarm;
+ALTER PROCEDURE public.p1(IN v_cnt integer, INOUT v_text text) OWNER TO buildfarm;
--
-- Name: read_ordered_int8s(public.ordered_int8s); Type: FUNCTION; Schema: public; Owner: buildfarm
@@ -171239,7 +171239,7 @@
-- Name: test_proc6(integer, integer, integer); Type: PROCEDURE; Schema: public; Owner: buildfarm
--
-CREATE PROCEDURE public.test_proc6(a integer, INOUT b integer, INOUT c integer)
+CREATE PROCEDURE public.test_proc6(IN a integer, INOUT b integer, INOUT c integer)
LANGUAGE plpgsql
AS $$
BEGIN
@@ -171249,13 +171249,13 @@
$$;
-ALTER PROCEDURE public.test_proc6(a integer, INOUT b integer, INOUT c integer) OWNER TO buildfarm;
+ALTER PROCEDURE public.test_proc6(IN a integer, INOUT b integer, INOUT c integer) OWNER TO buildfarm;
--
-- Name: test_proc7(integer, integer, numeric); Type: PROCEDURE; Schema: public; Owner: buildfarm
--
-CREATE PROCEDURE public.test_proc7(x integer, INOUT a integer, INOUT b numeric)
+CREATE PROCEDURE public.test_proc7(IN x integer, INOUT a integer, INOUT b numeric)
LANGUAGE plpgsql
AS $$
BEGIN
@@ -171268,13 +171268,13 @@
$$;
-ALTER PROCEDURE public.test_proc7(x integer, INOUT a integer, INOUT b numeric) OWNER TO buildfarm;
+ALTER PROCEDURE public.test_proc7(IN x integer, INOUT a integer, INOUT b numeric) OWNER TO buildfarm;
--
-- Name: test_proc7c(integer, integer, numeric); Type: PROCEDURE; Schema: public; Owner: buildfarm
--
-CREATE PROCEDURE public.test_proc7c(x integer, INOUT a integer, INOUT b numeric)
+CREATE PROCEDURE public.test_proc7c(IN x integer, INOUT a integer, INOUT b numeric)
LANGUAGE plpgsql
AS $$
BEGIN
@@ -171285,13 +171285,13 @@
$$;
-ALTER PROCEDURE public.test_proc7c(x integer, INOUT a integer, INOUT b numeric) OWNER TO buildfarm;
+ALTER PROCEDURE public.test_proc7c(IN x integer, INOUT a integer, INOUT b numeric) OWNER TO buildfarm;
--
-- Name: test_proc7cc(integer); Type: PROCEDURE; Schema: public; Owner: buildfarm
--
-CREATE PROCEDURE public.test_proc7cc(_x integer)
+CREATE PROCEDURE public.test_proc7cc(IN _x integer)
LANGUAGE plpgsql
AS $$
DECLARE _a int; _b numeric;
@@ -171302,7 +171302,7 @@
$$;
-ALTER PROCEDURE public.test_proc7cc(_x integer) OWNER TO buildfarm;
+ALTER PROCEDURE public.test_proc7cc(IN _x integer) OWNER TO buildfarm;
--
-- Name: test_proc8a(integer, integer); Type: PROCEDURE; Schema: public; Owner: buildfarm
@@ -171435,7 +171435,7 @@
-- Name: transaction_test1(integer, text); Type: PROCEDURE; Schema: public; Owner: buildfarm
--
-CREATE PROCEDURE public.transaction_test1(x integer, y text)
+CREATE PROCEDURE public.transaction_test1(IN x integer, IN y text)
LANGUAGE plpgsql
AS $$
BEGIN
@@ -171451,7 +171451,7 @@
$$;
-ALTER PROCEDURE public.transaction_test1(x integer, y text) OWNER TO buildfarm;
+ALTER PROCEDURE public.transaction_test1(IN x integer, IN y text) OWNER TO buildfarm;
--
-- Name: transaction_test10a(integer); Type: PROCEDURE; Schema: public; Owner: buildfarm
@@ -171611,7 +171611,7 @@
-- Name: transaction_test6(text); Type: PROCEDURE; Schema: public; Owner: buildfarm
--
-CREATE PROCEDURE public.transaction_test6(c text)
+CREATE PROCEDURE public.transaction_test6(IN c text)
LANGUAGE plpgsql
AS $$
BEGIN
@@ -171620,7 +171620,7 @@
$$;
-ALTER PROCEDURE public.transaction_test6(c text) OWNER TO buildfarm;
+ALTER PROCEDURE public.transaction_test6(IN c text) OWNER TO buildfarm;
--
-- Name: transaction_test7(); Type: PROCEDURE; Schema: public; Owner: buildfarm
@@ -173932,7 +173932,7 @@
-- Name: test_proc6(integer, integer, integer); Type: PROCEDURE; Schema: public; Owner: buildfarm
--
-CREATE PROCEDURE public.test_proc6(a integer, INOUT b integer, INOUT c integer)
+CREATE PROCEDURE public.test_proc6(IN a integer, INOUT b integer, INOUT c integer)
LANGUAGE pltcl
AS $_$
set bb [expr $2 * $1]
@@ -173941,7 +173941,7 @@
$_$;
-ALTER PROCEDURE public.test_proc6(a integer, INOUT b integer, INOUT c integer) OWNER TO buildfarm;
+ALTER PROCEDURE public.test_proc6(IN a integer, INOUT b integer, INOUT c integer) OWNER TO buildfarm;
--
-- Name: transaction_test1(); Type: PROCEDURE; Schema: public; Owner: buildfarm
@@ -174975,7 +174975,8 @@
--
CREATE TYPE public.arrayrange AS RANGE (
- subtype = integer[]
+ subtype = integer[],
+ multirange_type_name = public.arraymultirange
);
@@ -174998,7 +174999,8 @@
--
CREATE TYPE public.cashrange AS RANGE (
- subtype = money
+ subtype = money,
+ multirange_type_name = public.cashmultirange
);
@@ -175085,6 +175087,7 @@
INTERNALLENGTH = 16,
INPUT = public.int44in,
OUTPUT = public.int44out,
+ SUBSCRIPT = raw_array_subscript_handler,
ELEMENT = integer,
CATEGORY = 'x',
PREFERRED = true,
@@ -175132,6 +175135,7 @@
CREATE TYPE public.float8range AS RANGE (
subtype = double precision,
+ multirange_type_name = public.float8multirange,
subtype_diff = float8mi
);
@@ -175680,6 +175684,7 @@
CREATE TYPE public.textrange AS RANGE (
subtype = text,
+ multirange_type_name = public.textmultirange,
collation = pg_catalog."C"
);
@@ -177545,7 +177550,7 @@
-- Name: ptest3(text); Type: PROCEDURE; Schema: public; Owner: buildfarm
--
-CREATE PROCEDURE public.ptest3(y text)
+CREATE PROCEDURE public.ptest3(IN y text)
LANGUAGE sql
AS $_$
CALL ptest1(y);
@@ -177553,13 +177558,13 @@
$_$;
-ALTER PROCEDURE public.ptest3(y text) OWNER TO buildfarm;
+ALTER PROCEDURE public.ptest3(IN y text) OWNER TO buildfarm;
--
-- Name: ptest5(integer, text, integer); Type: PROCEDURE; Schema: public; Owner: buildfarm
--
-CREATE PROCEDURE public.ptest5(a integer, b text, c integer DEFAULT 100)
+CREATE PROCEDURE public.ptest5(IN a integer, IN b text, IN c integer DEFAULT 100)
LANGUAGE sql
AS $$
INSERT INTO cp_test VALUES(a, b);
@@ -177567,33 +177572,33 @@
$$;
-ALTER PROCEDURE public.ptest5(a integer, b text, c integer) OWNER TO buildfarm;
+ALTER PROCEDURE public.ptest5(IN a integer, IN b text, IN c integer) OWNER TO buildfarm;
--
-- Name: ptest6(integer, anyelement); Type: PROCEDURE; Schema: public; Owner: buildfarm
--
-CREATE PROCEDURE public.ptest6(a integer, b anyelement)
+CREATE PROCEDURE public.ptest6(IN a integer, IN b anyelement)
LANGUAGE sql
AS $$
SELECT NULL::int;
$$;
-ALTER PROCEDURE public.ptest6(a integer, b anyelement) OWNER TO buildfarm;
+ALTER PROCEDURE public.ptest6(IN a integer, IN b anyelement) OWNER TO buildfarm;
--
-- Name: ptest7(text, text); Type: PROCEDURE; Schema: public; Owner: buildfarm
--
-CREATE PROCEDURE public.ptest7(a text, b text)
+CREATE PROCEDURE public.ptest7(IN a text, IN b text)
LANGUAGE sql
AS $$
SELECT a = b;
$$;
-ALTER PROCEDURE public.ptest7(a text, b text) OWNER TO buildfarm;
+ALTER PROCEDURE public.ptest7(IN a text, IN b text) OWNER TO buildfarm;
--
-- Name: raise_test3(integer); Type: FUNCTION; Schema: public; Owner: buildfarm
@@ -180561,6 +180566,8 @@
--
CREATE OPERATOR FAMILY public.part_test_int4_ops USING hash;
+ALTER OPERATOR FAMILY public.part_test_int4_ops USING hash ADD
+ FUNCTION 2 (integer, integer) public.part_hashint4_noop(integer,bigint);
ALTER OPERATOR FAMILY public.part_test_int4_ops USING hash OWNER TO buildfarm;
@@ -180571,8 +180578,7 @@
CREATE OPERATOR CLASS public.part_test_int4_ops
FOR TYPE integer USING hash FAMILY public.part_test_int4_ops AS
- OPERATOR 1 =(integer,integer) ,
- FUNCTION 2 (integer, integer) public.part_hashint4_noop(integer,bigint);
+ OPERATOR 1 =(integer,integer);
ALTER OPERATOR CLASS public.part_test_int4_ops USING hash OWNER TO buildfarm;
@@ -180591,6 +180597,8 @@
--
CREATE OPERATOR FAMILY public.part_test_text_ops USING hash;
+ALTER OPERATOR FAMILY public.part_test_text_ops USING hash ADD
+ FUNCTION 2 (text, text) public.part_hashtext_length(text,bigint);
ALTER OPERATOR FAMILY public.part_test_text_ops USING hash OWNER TO buildfarm;
@@ -180601,8 +180609,7 @@
CREATE OPERATOR CLASS public.part_test_text_ops
FOR TYPE text USING hash FAMILY public.part_test_text_ops AS
- OPERATOR 1 =(text,text) ,
- FUNCTION 2 (text, text) public.part_hashtext_length(text,bigint);
+ OPERATOR 1 =(text,text);
ALTER OPERATOR CLASS public.part_test_text_ops USING hash OWNER TO buildfarm;
Attachment: dumpdiff-REL_13_STABLE (text/plain; charset=UTF-8)
--- /home/andrew/bf/root/upgrade.crake/HEAD/origin-REL_13_STABLE.sql.fixed 2021-09-12 16:14:12.048111840 -0400
+++ /home/andrew/bf/root/upgrade.crake/HEAD/converted-REL_13_STABLE-to-HEAD.sql.fixed 2021-09-12 16:14:12.100111841 -0400
@@ -170167,7 +170167,7 @@
-- Name: test_proc6(integer, integer, integer); Type: PROCEDURE; Schema: public; Owner: buildfarm
--
-CREATE PROCEDURE public.test_proc6(a integer, INOUT b integer, INOUT c integer)
+CREATE PROCEDURE public.test_proc6(IN a integer, INOUT b integer, INOUT c integer)
LANGUAGE plperl
AS $_$
my ($a, $b, $c) = @_;
@@ -170175,7 +170175,7 @@
$_$;
-ALTER PROCEDURE public.test_proc6(a integer, INOUT b integer, INOUT c integer) OWNER TO buildfarm;
+ALTER PROCEDURE public.test_proc6(IN a integer, INOUT b integer, INOUT c integer) OWNER TO buildfarm;
--
-- Name: text_arrayref(); Type: FUNCTION; Schema: public; Owner: buildfarm
@@ -171155,7 +171155,7 @@
-- Name: p1(integer, text); Type: PROCEDURE; Schema: public; Owner: buildfarm
--
-CREATE PROCEDURE public.p1(v_cnt integer, INOUT v_text text DEFAULT NULL::text)
+CREATE PROCEDURE public.p1(IN v_cnt integer, INOUT v_text text DEFAULT NULL::text)
LANGUAGE plpgsql
AS $$
BEGIN
@@ -171164,7 +171164,7 @@
$$;
-ALTER PROCEDURE public.p1(v_cnt integer, INOUT v_text text) OWNER TO buildfarm;
+ALTER PROCEDURE public.p1(IN v_cnt integer, INOUT v_text text) OWNER TO buildfarm;
--
-- Name: read_ordered_int8s(public.ordered_int8s); Type: FUNCTION; Schema: public; Owner: buildfarm
@@ -171686,7 +171686,7 @@
-- Name: test_proc6(integer, integer, integer); Type: PROCEDURE; Schema: public; Owner: buildfarm
--
-CREATE PROCEDURE public.test_proc6(a integer, INOUT b integer, INOUT c integer)
+CREATE PROCEDURE public.test_proc6(IN a integer, INOUT b integer, INOUT c integer)
LANGUAGE plpgsql
AS $$
BEGIN
@@ -171696,13 +171696,13 @@
$$;
-ALTER PROCEDURE public.test_proc6(a integer, INOUT b integer, INOUT c integer) OWNER TO buildfarm;
+ALTER PROCEDURE public.test_proc6(IN a integer, INOUT b integer, INOUT c integer) OWNER TO buildfarm;
--
-- Name: test_proc7(integer, integer, numeric); Type: PROCEDURE; Schema: public; Owner: buildfarm
--
-CREATE PROCEDURE public.test_proc7(x integer, INOUT a integer, INOUT b numeric)
+CREATE PROCEDURE public.test_proc7(IN x integer, INOUT a integer, INOUT b numeric)
LANGUAGE plpgsql
AS $$
BEGIN
@@ -171715,13 +171715,13 @@
$$;
-ALTER PROCEDURE public.test_proc7(x integer, INOUT a integer, INOUT b numeric) OWNER TO buildfarm;
+ALTER PROCEDURE public.test_proc7(IN x integer, INOUT a integer, INOUT b numeric) OWNER TO buildfarm;
--
-- Name: test_proc7c(integer, integer, numeric); Type: PROCEDURE; Schema: public; Owner: buildfarm
--
-CREATE PROCEDURE public.test_proc7c(x integer, INOUT a integer, INOUT b numeric)
+CREATE PROCEDURE public.test_proc7c(IN x integer, INOUT a integer, INOUT b numeric)
LANGUAGE plpgsql
AS $$
BEGIN
@@ -171732,13 +171732,13 @@
$$;
-ALTER PROCEDURE public.test_proc7c(x integer, INOUT a integer, INOUT b numeric) OWNER TO buildfarm;
+ALTER PROCEDURE public.test_proc7c(IN x integer, INOUT a integer, INOUT b numeric) OWNER TO buildfarm;
--
-- Name: test_proc7cc(integer); Type: PROCEDURE; Schema: public; Owner: buildfarm
--
-CREATE PROCEDURE public.test_proc7cc(_x integer)
+CREATE PROCEDURE public.test_proc7cc(IN _x integer)
LANGUAGE plpgsql
AS $$
DECLARE _a int; _b numeric;
@@ -171749,7 +171749,7 @@
$$;
-ALTER PROCEDURE public.test_proc7cc(_x integer) OWNER TO buildfarm;
+ALTER PROCEDURE public.test_proc7cc(IN _x integer) OWNER TO buildfarm;
--
-- Name: test_proc8a(integer, integer); Type: PROCEDURE; Schema: public; Owner: buildfarm
@@ -171882,7 +171882,7 @@
-- Name: transaction_test1(integer, text); Type: PROCEDURE; Schema: public; Owner: buildfarm
--
-CREATE PROCEDURE public.transaction_test1(x integer, y text)
+CREATE PROCEDURE public.transaction_test1(IN x integer, IN y text)
LANGUAGE plpgsql
AS $$
BEGIN
@@ -171898,7 +171898,7 @@
$$;
-ALTER PROCEDURE public.transaction_test1(x integer, y text) OWNER TO buildfarm;
+ALTER PROCEDURE public.transaction_test1(IN x integer, IN y text) OWNER TO buildfarm;
--
-- Name: transaction_test10a(integer); Type: PROCEDURE; Schema: public; Owner: buildfarm
@@ -172058,7 +172058,7 @@
-- Name: transaction_test6(text); Type: PROCEDURE; Schema: public; Owner: buildfarm
--
-CREATE PROCEDURE public.transaction_test6(c text)
+CREATE PROCEDURE public.transaction_test6(IN c text)
LANGUAGE plpgsql
AS $$
BEGIN
@@ -172067,7 +172067,7 @@
$$;
-ALTER PROCEDURE public.transaction_test6(c text) OWNER TO buildfarm;
+ALTER PROCEDURE public.transaction_test6(IN c text) OWNER TO buildfarm;
--
-- Name: transaction_test7(); Type: PROCEDURE; Schema: public; Owner: buildfarm
@@ -174390,7 +174390,7 @@
-- Name: test_proc6(integer, integer, integer); Type: PROCEDURE; Schema: public; Owner: buildfarm
--
-CREATE PROCEDURE public.test_proc6(a integer, INOUT b integer, INOUT c integer)
+CREATE PROCEDURE public.test_proc6(IN a integer, INOUT b integer, INOUT c integer)
LANGUAGE pltcl
AS $_$
set bb [expr $2 * $1]
@@ -174399,7 +174399,7 @@
$_$;
-ALTER PROCEDURE public.test_proc6(a integer, INOUT b integer, INOUT c integer) OWNER TO buildfarm;
+ALTER PROCEDURE public.test_proc6(IN a integer, INOUT b integer, INOUT c integer) OWNER TO buildfarm;
--
-- Name: transaction_test1(); Type: PROCEDURE; Schema: public; Owner: buildfarm
@@ -175433,7 +175433,8 @@
--
CREATE TYPE public.arrayrange AS RANGE (
- subtype = integer[]
+ subtype = integer[],
+ multirange_type_name = public.arraymultirange
);
@@ -175456,7 +175457,8 @@
--
CREATE TYPE public.cashrange AS RANGE (
- subtype = money
+ subtype = money,
+ multirange_type_name = public.cashmultirange
);
@@ -175543,6 +175545,7 @@
INTERNALLENGTH = 16,
INPUT = public.int44in,
OUTPUT = public.int44out,
+ SUBSCRIPT = raw_array_subscript_handler,
ELEMENT = integer,
CATEGORY = 'x',
PREFERRED = true,
@@ -175590,6 +175593,7 @@
CREATE TYPE public.float8range AS RANGE (
subtype = double precision,
+ multirange_type_name = public.float8multirange,
subtype_diff = float8mi
);
@@ -176138,6 +176142,7 @@
CREATE TYPE public.textrange AS RANGE (
subtype = text,
+ multirange_type_name = public.textmultirange,
collation = pg_catalog."C"
);
@@ -178259,7 +178264,7 @@
-- Name: ptest3(text); Type: PROCEDURE; Schema: public; Owner: buildfarm
--
-CREATE PROCEDURE public.ptest3(y text)
+CREATE PROCEDURE public.ptest3(IN y text)
LANGUAGE sql
AS $_$
CALL ptest1(y);
@@ -178267,13 +178272,13 @@
$_$;
-ALTER PROCEDURE public.ptest3(y text) OWNER TO buildfarm;
+ALTER PROCEDURE public.ptest3(IN y text) OWNER TO buildfarm;
--
-- Name: ptest5(integer, text, integer); Type: PROCEDURE; Schema: public; Owner: buildfarm
--
-CREATE PROCEDURE public.ptest5(a integer, b text, c integer DEFAULT 100)
+CREATE PROCEDURE public.ptest5(IN a integer, IN b text, IN c integer DEFAULT 100)
LANGUAGE sql
AS $$
INSERT INTO cp_test VALUES(a, b);
@@ -178281,33 +178286,33 @@
$$;
-ALTER PROCEDURE public.ptest5(a integer, b text, c integer) OWNER TO buildfarm;
+ALTER PROCEDURE public.ptest5(IN a integer, IN b text, IN c integer) OWNER TO buildfarm;
--
-- Name: ptest6(integer, anyelement); Type: PROCEDURE; Schema: public; Owner: buildfarm
--
-CREATE PROCEDURE public.ptest6(a integer, b anyelement)
+CREATE PROCEDURE public.ptest6(IN a integer, IN b anyelement)
LANGUAGE sql
AS $$
SELECT NULL::int;
$$;
-ALTER PROCEDURE public.ptest6(a integer, b anyelement) OWNER TO buildfarm;
+ALTER PROCEDURE public.ptest6(IN a integer, IN b anyelement) OWNER TO buildfarm;
--
-- Name: ptest7(text, text); Type: PROCEDURE; Schema: public; Owner: buildfarm
--
-CREATE PROCEDURE public.ptest7(a text, b text)
+CREATE PROCEDURE public.ptest7(IN a text, IN b text)
LANGUAGE sql
AS $$
SELECT a = b;
$$;
-ALTER PROCEDURE public.ptest7(a text, b text) OWNER TO buildfarm;
+ALTER PROCEDURE public.ptest7(IN a text, IN b text) OWNER TO buildfarm;
--
-- Name: raise_test3(integer); Type: FUNCTION; Schema: public; Owner: buildfarm
@@ -181298,6 +181303,8 @@
--
CREATE OPERATOR FAMILY public.part_test_int4_ops USING hash;
+ALTER OPERATOR FAMILY public.part_test_int4_ops USING hash ADD
+ FUNCTION 2 (integer, integer) public.part_hashint4_noop(integer,bigint);
ALTER OPERATOR FAMILY public.part_test_int4_ops USING hash OWNER TO buildfarm;
@@ -181308,8 +181315,7 @@
CREATE OPERATOR CLASS public.part_test_int4_ops
FOR TYPE integer USING hash FAMILY public.part_test_int4_ops AS
- OPERATOR 1 =(integer,integer) ,
- FUNCTION 2 (integer, integer) public.part_hashint4_noop(integer,bigint);
+ OPERATOR 1 =(integer,integer);
ALTER OPERATOR CLASS public.part_test_int4_ops USING hash OWNER TO buildfarm;
@@ -181328,6 +181334,8 @@
--
CREATE OPERATOR FAMILY public.part_test_text_ops USING hash;
+ALTER OPERATOR FAMILY public.part_test_text_ops USING hash ADD
+ FUNCTION 2 (text, text) public.part_hashtext_length(text,bigint);
ALTER OPERATOR FAMILY public.part_test_text_ops USING hash OWNER TO buildfarm;
@@ -181338,8 +181346,7 @@
CREATE OPERATOR CLASS public.part_test_text_ops
FOR TYPE text USING hash FAMILY public.part_test_text_ops AS
- OPERATOR 1 =(text,text) ,
- FUNCTION 2 (text, text) public.part_hashtext_length(text,bigint);
+ OPERATOR 1 =(text,text);
ALTER OPERATOR CLASS public.part_test_text_ops USING hash OWNER TO buildfarm;
On 9/13/21 9:20 AM, Andrew Dunstan wrote:
On 9/12/21 2:41 PM, Andrew Dunstan wrote:
On 9/11/21 8:51 PM, Justin Pryzby wrote:
@Andrew: did you have any comment on this part?
|Subject: buildfarm xversion diff
|Forking /messages/by-id/20210328231433.GI15100@telsasoft.com
|
|I gave a suggestion on how to reduce the "lines of diff" metric almost to nothing,
|allowing a very small "fudge factor", and which I think makes this a pretty
|good metric rather than a passable one.
Somehow I missed that. Looks like some good suggestions. I'll
experiment. (Note: we can't assume the presence of sed, especially on
Windows).
I tried with the attached patch on crake, which tests back as far as
9.2. Here are the diff counts from HEAD:
andrew@emma:HEAD $ grep -c '^[+-]' dumpdiff-REL9_* dumpdiff-REL_1*
dumpdiff-HEAD
dumpdiff-REL9_2_STABLE:514
dumpdiff-REL9_3_STABLE:169
dumpdiff-REL9_4_STABLE:185
dumpdiff-REL9_5_STABLE:221
dumpdiff-REL9_6_STABLE:11
dumpdiff-REL_10_STABLE:11
dumpdiff-REL_11_STABLE:73
dumpdiff-REL_12_STABLE:73
dumpdiff-REL_13_STABLE:73
dumpdiff-REL_14_STABLE:0
dumpdiff-HEAD:0
I've also attached those non-empty dumpdiff files for information, since
they are quite small.
There is still work to do, but this is promising. Next step: try it on
Windows.
It appears to do the right thing on Windows. yay!
We probably need to get smarter about the heuristics, though, e.g. by
taking into account the buildfarm options and the platform. It would
also help a lot if we could make vcregress.pl honor USE_MODULE_DB.
That's on my TODO list, but it just got a lot higher priority.
cheers
andrew
--
Andrew Dunstan
EDB: https://www.enterprisedb.com
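The "lines of diff" metric quoted above is just grep counting added/removed lines in each dumpdiff file. A quick illustration (the file name and its contents here are invented, not taken from the thread):

```shell
# Illustrative only: build a tiny fake dumpdiff file and count its
# added/removed lines the same way as above. Note that grep -c '^[+-]'
# also counts the ---/+++ file header lines, not only changed lines.
printf '%s\n' \
    '--- dump1.sql' \
    '+++ dump2.sql' \
    '@@ -1 +1 @@' \
    '-CREATE PROCEDURE public.ptest3(y text)' \
    '+CREATE PROCEDURE public.ptest3(IN y text)' \
    > dumpdiff-sample
grep -c '^[+-]' dumpdiff-sample    # prints 4: two headers plus one -/+ pair
```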
On 9/15/21 3:28 PM, Andrew Dunstan wrote:
On 9/13/21 9:20 AM, Andrew Dunstan wrote:
On 9/12/21 2:41 PM, Andrew Dunstan wrote:
On 9/11/21 8:51 PM, Justin Pryzby wrote:
@Andrew: did you have any comment on this part?
|Subject: buildfarm xversion diff
|Forking /messages/by-id/20210328231433.GI15100@telsasoft.com
|
|I gave a suggestion on how to reduce the "lines of diff" metric almost to nothing,
|allowing a very small "fudge factor", and which I think makes this a pretty
|good metric rather than a passable one.
Somehow I missed that. Looks like some good suggestions. I'll
experiment. (Note: we can't assume the presence of sed, especially on
Windows).
I tried with the attached patch on crake, which tests back as far as
9.2. Here are the diff counts from HEAD:
andrew@emma:HEAD $ grep -c '^[+-]' dumpdiff-REL9_* dumpdiff-REL_1*
dumpdiff-HEAD
dumpdiff-REL9_2_STABLE:514
dumpdiff-REL9_3_STABLE:169
dumpdiff-REL9_4_STABLE:185
dumpdiff-REL9_5_STABLE:221
dumpdiff-REL9_6_STABLE:11
dumpdiff-REL_10_STABLE:11
dumpdiff-REL_11_STABLE:73
dumpdiff-REL_12_STABLE:73
dumpdiff-REL_13_STABLE:73
dumpdiff-REL_14_STABLE:0
dumpdiff-HEAD:0
I've also attached those non-empty dumpdiff files for information, since
they are quite small.
There is still work to do, but this is promising. Next step: try it on
Windows.
It appears to do the right thing on Windows. yay!
We probably need to get smarter about the heuristics, though, e.g. by
taking into account the buildfarm options and the platform. It would
also help a lot if we could make vcregress.pl honor USE_MODULE_DB.
That's on my TODO list, but it just got a lot higher priority.
Here's what I've committed:
<https://github.com/PGBuildFarm/client-code/commit/6317d82c0e897a29dabd57ed8159d13920401f96>
cheers
andrew
--
Andrew Dunstan
EDB: https://www.enterprisedb.com
On Sat, Sep 11, 2021 at 07:51:16PM -0500, Justin Pryzby wrote:
These are all "translated" from test.sh, so follow its logic.
Maybe it should be improved, but that's separate from this patch - which is
already doing a few unrelated things.
I was looking at this CF entry, and what you are doing in 0004 to move
the tweaks from pg_upgrade's test.sh to a separate SQL script that
uses psql's meta-commands like \if to check which version we are on is
really interesting. The patch does not apply anymore, so this needs a
rebase. The entry has been switched as waiting on author by Tom, but
you did not update it after sending the new versions in [1]. I am
wondering if we could have something cleaner than just a set of booleans
as you do here for each check, as that does not help with the
readability of the tests.
[1]: /messages/by-id/20210912005116.GF26465@telsasoft.com
--
Michael
On Fri, Oct 01, 2021 at 04:58:41PM +0900, Michael Paquier wrote:
I was looking at this CF entry, and what you are doing in 0004 to move
the tweaks from pg_upgrade's test.sh to a separate SQL script that
uses psql's meta-commands like \if to check which version we are on is
really interesting. The patch does not apply anymore, so this needs a
rebase. The entry has been switched as waiting on author by Tom, but
you did not update it after sending the new versions in [1]. I am
wondering if we could have something cleaner than just a set of booleans
as you do here for each check, as that does not help with the
readability of the tests.
And so, I am back at this thread, looking at the set of patches
proposed from 0001 to 0004. The patches are rather messy and mix many
things and concepts, but there are basically four things that stand
out:
- test.sh is completely broken when using PG >= 14 as new version
because of the removal of the test tablespace. Older versions of
pg_regress don't support --make-tablespacedir so I am fine to stick a
couple of extra mkdirs for testtablespace/, expected/ and sql/ to
allow the script to work properly for major upgrades as a workaround,
but only if we use an old version. We need to do something here for
HEAD and REL_14_STABLE.
- The script would fail when using PG <= 11 as old version because of
WITH OIDS relations. We need to do something down to REL_12_STABLE.
I did not like much the approach taken to stick 4 ALTER TABLE queries
though (the patch was actually failing here for me), so instead I have
borrowed what the buildfarm has been doing with a DO block. That
works fine, and that's more portable.
- Not using --extra-float-digits with PG <= 11 as older version causes
a bunch of diffs in the dumps, making the whole thing unreadable. The patch
was doing that unconditionally for *all versions*, which is not good.
We should only do that on the versions that need it, and we know the
old version number before taking any dumps so that's easy to check.
- The addition of --wal-segsize and --allow-group-access breaks the
script when using PG < 10 at initdb time as these got added in 11.
With 10 getting EOL'd next year and per the lack of complaints, I am
not excited to do anything here and I'd rather leave this out so that we
keep coverage for those options across *all* major versions upgraded
from 11~. The buildfarm has tests down to 9.2, but for devs my take
is that this is enough for now.
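The version gating described above can be sketched in shell (a minimal sketch; the version number 110013 is an invented example for an 11.x old cluster, not a value from the thread):

```shell
# Minimal sketch: server_version_num reported by the old cluster decides
# which extra options the dumps need. In test.sh this value comes from
# "SHOW server_version_num"; here it is hard-coded for illustration.
oldpgversion=110013

extra_dump_options=""
if [ "$oldpgversion" -lt 120000 ]; then
    # Old clusters of v11 or older need stable float output in the dumps
    extra_dump_options="--extra-float-digits=0"
fi
echo "pg_dumpall options: $extra_dump_options"
```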
This is for the basics in terms of fixing test.sh and what should be
backpatched. In this aspect patches 0001 and 0002 were a bit
incorrect. I am not sure that 0003 is that interesting as designed as
we would miss any new core types introduced.
0004 is something I'd like to get done on HEAD to ease the move of the
pg_upgrade tests to TAP, but it could be made a bit easier to read by
not having all those oldpgversion_XX_YY flags grouped together for
one. So I am going to rewrite portions of it once done with the
above.
For now, attached is a patch to address the issues with test.sh that I
am planning to backpatch. This fixes the facility on HEAD, while
minimizing the diffs between the dumps. We could do more, like a
s/PROCEDURE/FUNCTION/ but that does not make the diffs really
unreadable either. I have only tested that on HEAD as new version
down to 11 as the oldest version per the business with --wal-segsize.
This still needs tests with 12~ as new version though, which is boring
but not complicated at all :)
--
Michael
Attachments:
upgrade-test-fixes.patch (text/x-diff; charset=us-ascii)
diff --git a/src/bin/pg_upgrade/test.sh b/src/bin/pg_upgrade/test.sh
index 1ba326decd..8593488907 100644
--- a/src/bin/pg_upgrade/test.sh
+++ b/src/bin/pg_upgrade/test.sh
@@ -23,7 +23,8 @@ standard_initdb() {
# To increase coverage of non-standard segment size and group access
# without increasing test runtime, run these tests with a custom setting.
# Also, specify "-A trust" explicitly to suppress initdb's warning.
- "$1" -N --wal-segsize 1 -g -A trust
+ # --allow-group-access and --wal-segsize have been added in v11.
+ "$1" -N --wal-segsize 1 --allow-group-access -A trust
if [ -n "$TEMP_CONFIG" -a -r "$TEMP_CONFIG" ]
then
cat "$TEMP_CONFIG" >> "$PGDATA/postgresql.conf"
@@ -107,6 +108,14 @@ EXTRA_REGRESS_OPTS="$EXTRA_REGRESS_OPTS --outputdir=$outputdir"
export EXTRA_REGRESS_OPTS
mkdir "$outputdir"
+# pg_regress --make-tablespacedir would take care of that in 14~, but this is
+# still required for older versions where this option is not supported.
+if [ "$newsrc" != "$oldsrc" ]; then
+ mkdir "$outputdir"/testtablespace
+ mkdir "$outputdir"/sql
+ mkdir "$outputdir"/expected
+fi
+
logdir=`pwd`/log
rm -rf "$logdir"
mkdir "$logdir"
@@ -163,20 +172,32 @@ createdb "regression$dbname1" || createdb_status=$?
createdb "regression$dbname2" || createdb_status=$?
createdb "regression$dbname3" || createdb_status=$?
+# Extra options to apply to the dump. This may be changed later.
+extra_dump_options=""
+
if "$MAKE" -C "$oldsrc" installcheck-parallel; then
oldpgversion=`psql -X -A -t -d regression -c "SHOW server_version_num"`
- # before dumping, get rid of objects not feasible in later versions
+ # Before dumping, tweak the database of the old instance depending
+ # on its version.
if [ "$newsrc" != "$oldsrc" ]; then
fix_sql=""
+ # Get rid of objects not feasible in later versions
case $oldpgversion in
804??)
fix_sql="DROP FUNCTION public.myfunc(integer);"
;;
esac
- fix_sql="$fix_sql
- DROP FUNCTION IF EXISTS
- public.oldstyle_length(integer, text); -- last in 9.6
+
+ # Last appeared in v9.6
+ if [ $oldpgversion -lt 100000 ]; then
+ fix_sql="$fix_sql
+ DROP FUNCTION IF EXISTS
+ public.oldstyle_length(integer, text);"
+ fi
+ # Last appeared in v13
+ if [ $oldpgversion -lt 140000 ]; then
+ fix_sql="$fix_sql
DROP FUNCTION IF EXISTS
public.putenv(text); -- last in v13
DROP OPERATOR IF EXISTS -- last in v13
@@ -184,10 +205,40 @@ if "$MAKE" -C "$oldsrc" installcheck-parallel; then
public.#%# (pg_catalog.int8, NONE),
public.!=- (pg_catalog.int8, NONE),
public.#@%# (pg_catalog.int8, NONE);"
+ fi
psql -X -d regression -c "$fix_sql;" || psql_fix_sql_status=$?
+
+ # WITH OIDS is not supported anymore in v12, so remove support
+ # for any relations marked as such.
+ if [ $oldpgversion -lt 120000 ]; then
+ fix_sql="DO \$stmt\$
+ DECLARE
+ rec text;
+ BEGIN
+ FOR rec in
+ SELECT oid::regclass::text
+ FROM pg_class
+ WHERE relname !~ '^pg_'
+ AND relhasoids
+ AND relkind in ('r','m')
+ ORDER BY 1
+ LOOP
+ execute 'ALTER TABLE ' || rec || ' SET WITHOUT OIDS';
+ END LOOP;
+ END; \$stmt\$;"
+ psql -X -d regression -c "$fix_sql;" || psql_fix_sql_status=$?
+ fi
+
+ # Handling of --extra-float-digits gets messy after v12.
+ # Note that this changes the dumps from the old and new
+ # instances if involving an old cluster of v11 or older.
+ if [ $oldpgversion -lt 120000 ]; then
+ extra_dump_options="--extra-float-digits=0"
+ fi
fi
- pg_dumpall --no-sync -f "$temp_root"/dump1.sql || pg_dumpall1_status=$?
+ pg_dumpall $extra_dump_options --no-sync \
+ -f "$temp_root"/dump1.sql || pg_dumpall1_status=$?
if [ "$newsrc" != "$oldsrc" ]; then
# update references to old source tree's regress.so etc
@@ -249,7 +300,8 @@ esac
pg_ctl start -l "$logdir/postmaster2.log" -o "$POSTMASTER_OPTS" -w
-pg_dumpall --no-sync -f "$temp_root"/dump2.sql || pg_dumpall2_status=$?
+pg_dumpall $extra_dump_options --no-sync \
+ -f "$temp_root"/dump2.sql || pg_dumpall2_status=$?
pg_ctl -m fast stop
if [ -n "$pg_dumpall2_status" ]; then
On Mon, Oct 11, 2021 at 02:38:12PM +0900, Michael Paquier wrote:
For now, attached is a patch to address the issues with test.sh that I
am planning to backpatch. This fixes the facility on HEAD, while
minimizing the diffs between the dumps. We could do more, like a
s/PROCEDURE/FUNCTION/ but that does not make the diffs really
unreadable either. I have only tested that on HEAD as new version
down to 11 as the oldest version per the business with --wal-segsize.
This still needs tests with 12~ as new version though, which is boring
but not complicated at all :)
Okay, tested and done as of fa66b6d.
--
Michael
On Mon, Oct 11, 2021 at 02:38:12PM +0900, Michael Paquier wrote:
On Fri, Oct 01, 2021 at 04:58:41PM +0900, Michael Paquier wrote:
I was looking at this CF entry, and what you are doing in 0004 to move
the tweaks from pg_upgrade's test.sh to a separate SQL script that
uses psql's meta-commands like \if to check which version we are on is
really interesting. The patch does not apply anymore, so this needs a
rebase. The entry has been switched as waiting on author by Tom, but
you did not update it after sending the new versions in [1]. I am
wondering if we could have something cleaner than just a set of booleans
as you do here for each check, as that does not help with the
readability of the tests.
And so, I am back at this thread, looking at the set of patches
proposed from 0001 to 0004. The patches are rather messy and mix many
things and concepts, but there are basically four things that stand
out:
- test.sh is completely broken when using PG >= 14 as new version
because of the removal of the test tablespace. Older versions of
pg_regress don't support --make-tablespacedir so I am fine to stick a
couple of extra mkdirs for testtablespace/, expected/ and sql/ to
allow the script to work properly for major upgrades as a workaround,
but only if we use an old version. We need to do something here for
HEAD and REL_14_STABLE.
- The script would fail when using PG <= 11 as old version because of
WITH OIDS relations. We need to do something down to REL_12_STABLE.
I did not like much the approach taken to stick 4 ALTER TABLE queries
though (the patch was actually failing here for me), so instead I have
borrowed what the buildfarm has been doing with a DO block. That
works fine, and that's more portable.
- Not using --extra-float-digits with PG <= 11 as older version causes
a bunch of diffs in the dumps, making the whole thing unreadable. The patch
was doing that unconditionally for *all versions*, which is not good.
We should only do that on the versions that need it, and we know the
old version number before taking any dumps so that's easy to check.
- The addition of --wal-segsize and --allow-group-access breaks the
script when using PG < 10 at initdb time as these got added in 11.
With 10 getting EOL'd next year and per the lack of complaints, I am
not excited to do anything here and I'd rather leave this out so that we
keep coverage for those options across *all* major versions upgraded
from 11~. The buildfarm has tests down to 9.2, but for devs my take
is that this is enough for now.
Michael handled those in fa66b6d.
Note that the patch assumes that the "old version" being pg_upgraded has
commit 97f73a978: "Work around cross-version-upgrade issues created by commit 9e38c2bb5."
That may be good enough for test.sh, but if the kludges were moved to a .sql
script which was also run by the buildfarm (instead of its hardcoded kludges), then
it might be necessary to handle the additional stuff my patch did, like:
+ DROP TRANSFORM FOR integer LANGUAGE sql CASCADE;"
+ DROP FUNCTION boxarea(box);"
+ DROP FUNCTION funny_dup17();"
+ DROP TABLE abstime_tbl;"
+ DROP TABLE reltime_tbl;"
+ DROP TABLE tinterval_tbl;"
+ DROP AGGREGATE first_el_agg_any(anyelement);"
+ DROP AGGREGATE array_cat_accum(anyarray);"
+ DROP OPERATOR @#@(NONE,bigint);"
Or, maybe it's guaranteed that the animals all run the latest version of the old
branches, in which case I think some of the BF's existing logic could be
dropped, which would help to reconcile these two scripts:
my $missing_funcs = q{drop function if exists public.boxarea(box);
drop function if exists public.funny_dup17();
..
$prstmt = join(';',
'drop operator @#@ (NONE, bigint)',
..
'drop aggregate if exists public.array_cat_accum(anyarray)',
This is for the basics in terms of fixing test.sh and what should be
backpatched. In this aspect patches 0001 and 0002 were a bit
incorrect. I am not sure that 0003 is that interesting as designed as
we would miss any new core types introduced.
We wouldn't miss new core types, because of the 2nd part of type_sanity which
tests that each core type was included in the "manytypes" table.
+-- And now a test on the previous test, checking that all core types are
+-- included in this table
+-- XXX or some other non-catalog table processed by pg_upgrade
+SELECT oid, typname, typtype, typelem, typarray, typarray FROM pg_type t
+WHERE typtype NOT IN ('p', 'c')
+-- reg* which cannot be pg_upgraded
+AND oid != ALL(ARRAY['regproc', 'regprocedure', 'regoper', 'regoperator', 'regconfig', 'regdictionary', 'regnamespace', 'regcollation']::regtype[])
+-- XML might be disabled at compile-time
+AND oid != ALL(ARRAY['xml', 'gtsvector', 'pg_node_tree', 'pg_ndistinct', 'pg_dependencies', 'pg_mcv_list', 'pg_brin_bloom_summary', 'pg_brin_minmax_multi_summary']::regtype[])
+AND NOT EXISTS (SELECT 1 FROM pg_type u WHERE u.typarray=t.oid) -- exclude arrays
+AND NOT EXISTS (SELECT 1 FROM pg_attribute a WHERE a.atttypid=t.oid AND a.attnum>0 AND a.attrelid='manytypes'::regclass);
0004 is something I'd like to get done on HEAD to ease the move of the
pg_upgrade tests to TAP, but it could be made a bit easier to read by
not having all those oldpgversion_XX_YY flags grouped together for
one. So I am going to rewrite portions of it once done with the
above.
--
Justin
Attachments:
v6-0001-pg_upgrade-test-to-exercise-binary-compatibility.patch (text/x-diff; charset=us-ascii)
From 50261556825655c0f78459dd2a1cc310d88f55d6 Mon Sep 17 00:00:00 2001
From: Justin Pryzby <pryzbyj@telsasoft.com>
Date: Sat, 5 Dec 2020 17:20:09 -0600
Subject: [PATCH v6 1/2] pg_upgrade: test to exercise binary compatibility
Creating a table with columns of many different datatypes to notice if the
binary format is accidentally changed again, as happened at:
7c15cef86 Base information_schema.sql_identifier domain on name, not varchar.
I checked that if I cherry-pick to v11, and comment out
old_11_check_for_sql_identifier_data_type_usage(), then pg_upgrade/test.sh
detects the original problem:
pg_dump: error: Error message from server: ERROR: invalid memory alloc request size 18446744073709551613
I understand the buildfarm has its own cross-version-upgrade test, which I
think would catch this on its own.
---
src/test/regress/expected/sanity_check.out | 1 +
src/test/regress/expected/type_sanity.out | 55 ++++++++++++++++++++++
src/test/regress/sql/type_sanity.sql | 54 +++++++++++++++++++++
3 files changed, 110 insertions(+)
diff --git a/src/test/regress/expected/sanity_check.out b/src/test/regress/expected/sanity_check.out
index d04dc66db9..b4880ea3af 100644
--- a/src/test/regress/expected/sanity_check.out
+++ b/src/test/regress/expected/sanity_check.out
@@ -69,6 +69,7 @@ line_tbl|f
log_table|f
lseg_tbl|f
main_table|f
+manytypes|f
mlparted|f
mlparted1|f
mlparted11|f
diff --git a/src/test/regress/expected/type_sanity.out b/src/test/regress/expected/type_sanity.out
index f567fd378e..58013a8df3 100644
--- a/src/test/regress/expected/type_sanity.out
+++ b/src/test/regress/expected/type_sanity.out
@@ -674,3 +674,58 @@ WHERE p1.rngmultitypid IS NULL OR p1.rngmultitypid = 0;
----------+------------+---------------
(0 rows)
+-- Create a table with different data types, to exercise binary compatibility
+-- during pg_upgrade test
+CREATE TABLE manytypes AS SELECT
+'(11,12)'::point, '(1,1),(2,2)'::line,
+'((11,11),(12,12))'::lseg, '((11,11),(13,13))'::box,
+'((11,12),(13,13),(14,14))'::path AS openedpath, '[(11,12),(13,13),(14,14)]'::path AS closedpath,
+'((11,12),(13,13),(14,14))'::polygon, '1,1,1'::circle,
+'today'::date, 'now'::time, 'now'::timestamp, 'now'::timetz, 'now'::timestamptz, '12 seconds'::interval,
+'{"reason":"because"}'::json, '{"when":"now"}'::jsonb, '$.a[*] ? (@ > 2)'::jsonpath,
+'127.0.0.1'::inet, '127.0.0.0/8'::cidr, '00:01:03:86:1c:ba'::macaddr8, '00:01:03:86:1c:ba'::macaddr,
+2::int2, 4::int4, 8::int8, 4::float4, '8'::float8, pi()::numeric,
+'foo'::"char", 'c'::bpchar, 'abc'::varchar, 'name'::name, 'txt'::text, true::bool,
+E'\\xDEADBEEF'::bytea, B'10001'::bit, B'10001'::varbit AS varbit, '12.34'::money,
+'abc'::refcursor,
+'1 2'::int2vector, '1 2'::oidvector, format('%s=UC/%s', USER, USER)::aclitem,
+'a fat cat sat on a mat and ate a fat rat'::tsvector, 'fat & rat'::tsquery,
+'a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a11'::uuid, '11'::xid8,
+'pg_class'::regclass, 'regtype'::regtype type, 'pg_monitor'::regrole,
+'pg_class'::regclass::oid, '(1,1)'::tid, '2'::xid, '3'::cid,
+'10:20:10,14,15'::txid_snapshot, '10:20:10,14,15'::pg_snapshot, '16/B374D848'::pg_lsn,
+1::information_schema.cardinal_number,
+'l'::information_schema.character_data,
+'n'::information_schema.sql_identifier,
+'now'::information_schema.time_stamp,
+'YES'::information_schema.yes_or_no,
+'venus'::planets, 'i16'::insenum,
+'(1,2)'::int4range, '{(1,2)}'::int4multirange,
+'(3,4)'::int8range, '{(3,4)}'::int8multirange,
+'(1,2)'::float8range, '{(1,2)}'::float8multirange,
+'(3,4)'::numrange, '{(3,4)}'::nummultirange,
+'(a,b)'::textrange, '{(a,b)}'::textmultirange,
+'(12.34, 56.78)'::cashrange, '{(12.34, 56.78)}'::cashmultirange,
+'(2020-01-02, 2021-02-03)'::daterange,
+'{(2020-01-02, 2021-02-03)}'::datemultirange,
+'(2020-01-02 03:04:05, 2021-02-03 06:07:08)'::tsrange,
+'{(2020-01-02 03:04:05, 2021-02-03 06:07:08)}'::tsmultirange,
+'(2020-01-02 03:04:05, 2021-02-03 06:07:08)'::tstzrange,
+'{(2020-01-02 03:04:05, 2021-02-03 06:07:08)}'::tstzmultirange,
+arrayrange(ARRAY[1,2], ARRAY[2,1]),
+arraymultirange(arrayrange(ARRAY[1,2], ARRAY[2,1]));
+-- And now a test on the previous test, checking that all core types are
+-- included in this table
+-- XXX or some other non-catalog table processed by pg_upgrade
+SELECT oid, typname, typtype, typelem, typarray, typarray FROM pg_type t
+WHERE typtype NOT IN ('p', 'c')
+-- reg* which cannot be pg_upgraded
+AND oid != ALL(ARRAY['regproc', 'regprocedure', 'regoper', 'regoperator', 'regconfig', 'regdictionary', 'regnamespace', 'regcollation']::regtype[])
+-- XML might be disabled at compile-time
+AND oid != ALL(ARRAY['xml', 'gtsvector', 'pg_node_tree', 'pg_ndistinct', 'pg_dependencies', 'pg_mcv_list', 'pg_brin_bloom_summary', 'pg_brin_minmax_multi_summary']::regtype[])
+AND NOT EXISTS (SELECT 1 FROM pg_type u WHERE u.typarray=t.oid) -- exclude arrays
+AND NOT EXISTS (SELECT 1 FROM pg_attribute a WHERE a.atttypid=t.oid AND a.attnum>0 AND a.attrelid='manytypes'::regclass);
+ oid | typname | typtype | typelem | typarray | typarray
+-----+---------+---------+---------+----------+----------
+(0 rows)
+
diff --git a/src/test/regress/sql/type_sanity.sql b/src/test/regress/sql/type_sanity.sql
index 404c3a2043..e98191f01f 100644
--- a/src/test/regress/sql/type_sanity.sql
+++ b/src/test/regress/sql/type_sanity.sql
@@ -495,3 +495,57 @@ WHERE pronargs != 2
SELECT p1.rngtypid, p1.rngsubtype, p1.rngmultitypid
FROM pg_range p1
WHERE p1.rngmultitypid IS NULL OR p1.rngmultitypid = 0;
+
+-- Create a table with different data types, to exercise binary compatibility
+-- during pg_upgrade test
+
+CREATE TABLE manytypes AS SELECT
+'(11,12)'::point, '(1,1),(2,2)'::line,
+'((11,11),(12,12))'::lseg, '((11,11),(13,13))'::box,
+'((11,12),(13,13),(14,14))'::path AS openedpath, '[(11,12),(13,13),(14,14)]'::path AS closedpath,
+'((11,12),(13,13),(14,14))'::polygon, '1,1,1'::circle,
+'today'::date, 'now'::time, 'now'::timestamp, 'now'::timetz, 'now'::timestamptz, '12 seconds'::interval,
+'{"reason":"because"}'::json, '{"when":"now"}'::jsonb, '$.a[*] ? (@ > 2)'::jsonpath,
+'127.0.0.1'::inet, '127.0.0.0/8'::cidr, '00:01:03:86:1c:ba'::macaddr8, '00:01:03:86:1c:ba'::macaddr,
+2::int2, 4::int4, 8::int8, 4::float4, '8'::float8, pi()::numeric,
+'foo'::"char", 'c'::bpchar, 'abc'::varchar, 'name'::name, 'txt'::text, true::bool,
+E'\\xDEADBEEF'::bytea, B'10001'::bit, B'10001'::varbit AS varbit, '12.34'::money,
+'abc'::refcursor,
+'1 2'::int2vector, '1 2'::oidvector, format('%s=UC/%s', USER, USER)::aclitem,
+'a fat cat sat on a mat and ate a fat rat'::tsvector, 'fat & rat'::tsquery,
+'a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a11'::uuid, '11'::xid8,
+'pg_class'::regclass, 'regtype'::regtype type, 'pg_monitor'::regrole,
+'pg_class'::regclass::oid, '(1,1)'::tid, '2'::xid, '3'::cid,
+'10:20:10,14,15'::txid_snapshot, '10:20:10,14,15'::pg_snapshot, '16/B374D848'::pg_lsn,
+1::information_schema.cardinal_number,
+'l'::information_schema.character_data,
+'n'::information_schema.sql_identifier,
+'now'::information_schema.time_stamp,
+'YES'::information_schema.yes_or_no,
+'venus'::planets, 'i16'::insenum,
+'(1,2)'::int4range, '{(1,2)}'::int4multirange,
+'(3,4)'::int8range, '{(3,4)}'::int8multirange,
+'(1,2)'::float8range, '{(1,2)}'::float8multirange,
+'(3,4)'::numrange, '{(3,4)}'::nummultirange,
+'(a,b)'::textrange, '{(a,b)}'::textmultirange,
+'(12.34, 56.78)'::cashrange, '{(12.34, 56.78)}'::cashmultirange,
+'(2020-01-02, 2021-02-03)'::daterange,
+'{(2020-01-02, 2021-02-03)}'::datemultirange,
+'(2020-01-02 03:04:05, 2021-02-03 06:07:08)'::tsrange,
+'{(2020-01-02 03:04:05, 2021-02-03 06:07:08)}'::tsmultirange,
+'(2020-01-02 03:04:05, 2021-02-03 06:07:08)'::tstzrange,
+'{(2020-01-02 03:04:05, 2021-02-03 06:07:08)}'::tstzmultirange,
+arrayrange(ARRAY[1,2], ARRAY[2,1]),
+arraymultirange(arrayrange(ARRAY[1,2], ARRAY[2,1]));
+
+-- And now a test on the previous test, checking that all core types are
+-- included in this table
+-- XXX or some other non-catalog table processed by pg_upgrade
+SELECT oid, typname, typtype, typelem, typarray, typarray FROM pg_type t
+WHERE typtype NOT IN ('p', 'c')
+-- reg* which cannot be pg_upgraded
+AND oid != ALL(ARRAY['regproc', 'regprocedure', 'regoper', 'regoperator', 'regconfig', 'regdictionary', 'regnamespace', 'regcollation']::regtype[])
+-- XML might be disabled at compile-time
+AND oid != ALL(ARRAY['xml', 'gtsvector', 'pg_node_tree', 'pg_ndistinct', 'pg_dependencies', 'pg_mcv_list', 'pg_brin_bloom_summary', 'pg_brin_minmax_multi_summary']::regtype[])
+AND NOT EXISTS (SELECT 1 FROM pg_type u WHERE u.typarray=t.oid) -- exclude arrays
+AND NOT EXISTS (SELECT 1 FROM pg_attribute a WHERE a.atttypid=t.oid AND a.attnum>0 AND a.attrelid='manytypes'::regclass);
--
2.17.0
v6-0002-Move-pg_upgrade-kludges-to-sql-script.patch (text/x-diff; charset=us-ascii)
From 5ab4b974464f9732d5c9362f3b92cd33653864b3 Mon Sep 17 00:00:00 2001
From: Justin Pryzby <pryzbyj@telsasoft.com>
Date: Sat, 6 Mar 2021 18:35:26 -0600
Subject: [PATCH v6 2/2] Move pg_upgrade kludges to sql script
NOTE, "IF EXISTS" isn't necessary in fa66b6dee
---
src/bin/pg_upgrade/test-upgrade.sql | 51 +++++++++++++++++++++++++++++
src/bin/pg_upgrade/test.sh | 48 +--------------------------
2 files changed, 52 insertions(+), 47 deletions(-)
create mode 100644 src/bin/pg_upgrade/test-upgrade.sql
diff --git a/src/bin/pg_upgrade/test-upgrade.sql b/src/bin/pg_upgrade/test-upgrade.sql
new file mode 100644
index 0000000000..74fad312cb
--- /dev/null
+++ b/src/bin/pg_upgrade/test-upgrade.sql
@@ -0,0 +1,51 @@
+-- This file has a bunch of kludges needed for testing upgrades across major versions
+-- It supports testing the most recent version of an old release (not any arbitrary minor version).
+
+SELECT
+ ver <= 804 AS oldpgversion_le84,
+ ver < 1000 AS oldpgversion_lt10,
+ ver < 1200 AS oldpgversion_lt12,
+ ver < 1400 AS oldpgversion_lt14
+ FROM (SELECT current_setting('server_version_num')::int/100 AS ver) AS v;
+\gset
+
+\if :oldpgversion_le84
+DROP FUNCTION public.myfunc(integer);
+\endif
+
+\if :oldpgversion_lt10
+-- last in 9.6 -- commit 5ded4bd21
+DROP FUNCTION public.oldstyle_length(integer, text);
+\endif
+
+\if :oldpgversion_lt14
+-- last in v13 commit 7ca37fb04
+DROP FUNCTION IF EXISTS public.putenv(text);
+-- last in v13 commit 76f412ab3
+-- public.!=- This one is only needed for v11+ ??
+-- Note, until v10, operators could only be dropped one at a time
+DROP OPERATOR public.#@# (pg_catalog.int8, NONE);
+DROP OPERATOR public.#%# (pg_catalog.int8, NONE);
+DROP OPERATOR public.!=- (pg_catalog.int8, NONE);
+DROP OPERATOR public.#@%# (pg_catalog.int8, NONE);
+\endif
+
+\if :oldpgversion_lt12
+-- WITH OIDS is not supported anymore in v12, so remove support
+-- for any relations marked as such.
+DO $stmt$
+ DECLARE
+ rec text;
+ BEGIN
+ FOR rec in
+ SELECT oid::regclass::text
+ FROM pg_class
+ WHERE relname !~ '^pg_'
+ AND relhasoids
+ AND relkind in ('r','m')
+ ORDER BY 1
+ LOOP
+ execute 'ALTER TABLE ' || rec || ' SET WITHOUT OIDS';
+ END LOOP;
+ END; $stmt$;
+\endif
diff --git a/src/bin/pg_upgrade/test.sh b/src/bin/pg_upgrade/test.sh
index 8593488907..46a1ebb4ab 100644
--- a/src/bin/pg_upgrade/test.sh
+++ b/src/bin/pg_upgrade/test.sh
@@ -181,53 +181,7 @@ if "$MAKE" -C "$oldsrc" installcheck-parallel; then
# Before dumping, tweak the database of the old instance depending
# on its version.
if [ "$newsrc" != "$oldsrc" ]; then
- fix_sql=""
- # Get rid of objects not feasible in later versions
- case $oldpgversion in
- 804??)
- fix_sql="DROP FUNCTION public.myfunc(integer);"
- ;;
- esac
-
- # Last appeared in v9.6
- if [ $oldpgversion -lt 100000 ]; then
- fix_sql="$fix_sql
- DROP FUNCTION IF EXISTS
- public.oldstyle_length(integer, text);"
- fi
- # Last appeared in v13
- if [ $oldpgversion -lt 140000 ]; then
- fix_sql="$fix_sql
- DROP FUNCTION IF EXISTS
- public.putenv(text); -- last in v13
- DROP OPERATOR IF EXISTS -- last in v13
- public.#@# (pg_catalog.int8, NONE),
- public.#%# (pg_catalog.int8, NONE),
- public.!=- (pg_catalog.int8, NONE),
- public.#@%# (pg_catalog.int8, NONE);"
- fi
- psql -X -d regression -c "$fix_sql;" || psql_fix_sql_status=$?
-
- # WITH OIDS is not supported anymore in v12, so remove support
- # for any relations marked as such.
- if [ $oldpgversion -lt 120000 ]; then
- fix_sql="DO \$stmt\$
- DECLARE
- rec text;
- BEGIN
- FOR rec in
- SELECT oid::regclass::text
- FROM pg_class
- WHERE relname !~ '^pg_'
- AND relhasoids
- AND relkind in ('r','m')
- ORDER BY 1
- LOOP
- execute 'ALTER TABLE ' || rec || ' SET WITHOUT OIDS';
- END LOOP;
- END; \$stmt\$;"
- psql -X -d regression -c "$fix_sql;" || psql_fix_sql_status=$?
- fi
+ psql -X -d regression -f "test-upgrade.sql" || psql_fix_sql_status=$?
# Handling of --extra-float-digits gets messy after v12.
# Note that this changes the dumps from the old and new
--
2.17.0
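The gating at the top of test-upgrade.sql divides `server_version_num` by 100, so 9.6.5 (90605) maps to 906 and 12.2 (120002) maps to 1200, and `\gset` then turns each boolean column into a psql variable usable with `\if`. The arithmetic can be sanity-checked outside psql; here is a quick sketch in Python (the flag names mirror the script, but the function itself is an illustration, not part of the patch):

```python
# Sketch of the version gating used by test-upgrade.sql: psql computes
# current_setting('server_version_num')::int / 100, producing the
# two-part branch number that the \if blocks compare against.
def gating_flags(server_version_num: int) -> dict:
    ver = server_version_num // 100
    return {
        "oldpgversion_le84": ver <= 804,
        "oldpgversion_lt10": ver < 1000,
        "oldpgversion_lt12": ver < 1200,
        "oldpgversion_lt14": ver < 1400,
    }

# A 9.6 server needs every cleanup except the <= 8.4 one.
print(gating_flags(90605))
# A v13 server only needs the pre-v14 cleanup.
print(gating_flags(130004))
```

This also shows why the script only supports the latest minor of each old branch: the minor digits are discarded by the integer division.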
On Sun, Nov 07, 2021 at 01:22:00PM -0600, Justin Pryzby wrote:
That may be good enough for test.sh, but if the kludges were moved to a .sql
script which was also run by the buildfarm (instead of its hardcoded kludges), then
it might be necessary to handle the additional stuff my patch did, like:
[...]
Or, maybe it's guaranteed that the animals all run latest version of old
branches, in which case I think some of the BF's existing logic could be
dropped, which would help to reconcile these two scripts:
I am pretty sure that it is safe to assume that all buildfarm animals
run the top of the stable branch they are testing, at least on the
community side. An advantage of moving all those SQLs to a script
that can be processed with psql, thanks to the \if metacommands you have
added, is that buildfarm clients are not required to immediately update
their code to work properly. Considering as well that we should
minimize the amount of duplication between all those things, I'd like
to think that we'd better apply 0002 and consider a backpatch to allow
the buildfarm to catch up on it. It should at least allow us to remove a
good chunk of the object cleanup done directly by the buildfarm.
This is for the basics in terms of fixing test.sh and what should be
backpatched. In this aspect patches 0001 and 0002 were a bit
incorrect. I am not sure that 0003 is that interesting as designed, as
we would miss any new core types introduced.
We wouldn't miss new core types, because of the 2nd part of type_sanity which
tests that each core type was included in the "manytypes" table.
+-- XML might be disabled at compile-time
+AND oid != ALL(ARRAY['xml', 'gtsvector', 'pg_node_tree',
'pg_ndistinct', 'pg_dependencies', 'pg_mcv_list',
'pg_brin_bloom_summary', 'pg_brin_minmax_multi_summary']::regtype[])
I believe that this comment is incomplete, applying only to the first
element listed in this array. I guess that this had better document
why those catalogs are part of the list? Good to see that adding a
reg* in core would immediately be noticed though, as far as I
understand this SQL.
--
Michael
On Sun, Nov 07, 2021 at 01:22:00PM -0600, Justin Pryzby wrote:
That may be good enough for test.sh, but if the kludges were moved to a .sql
script which was also run by the buildfarm (instead of its hardcoded kludges), then
it might be necessary to handle the additional stuff my patch did, like:
+ DROP TRANSFORM FOR integer LANGUAGE sql CASCADE;
+ DROP FUNCTION boxarea(box);
+ DROP FUNCTION funny_dup17();
These apply for an old version <= v10.
+ DROP TABLE abstime_tbl;
+ DROP TABLE reltime_tbl;
+ DROP TABLE tinterval_tbl;
old version <= 9.3.
+ DROP AGGREGATE first_el_agg_any(anyelement);
Not sure about this one.
+ DROP AGGREGATE array_cat_accum(anyarray);
+ DROP OPERATOR @#@(NONE,bigint);
These are on 9.4. It is worth noting that TestUpgradeXversion.pm
recreates those objects. I'd agree with closing the gap completely rather
than just moving what test.sh does, so as to remove as much buildfarm
client code as possible.
Or, maybe it's guaranteed that the animals all run latest version of old
branches, in which case I think some of the BF's existing logic could be
dropped, which would help to reconcile these two scripts:
That seems like a worthy goal to reduce the amount of duplication with
the buildfarm code, while allowing tests from upgrades with older
versions (the WAL segment size and group permission issue in test.sh
had better be addressed in a better way, perhaps once the pg_upgrade
tests are moved to TAP). There are also things specific to contrib/
modules with older versions, but that may be too specific for this
exercise.
+\if :oldpgversion_le84
+DROP FUNCTION public.myfunc(integer);
+\endif
The oldest version tested by the buildfarm is 9.2, so we could ignore
this part I guess?
Andrew, what do you think about this part? Based on my read of this
thread, there is an agreement that this approach makes the buildfarm
code more manageable, so that committers would not need to patch the
buildfarm code if their tests fail. I agree with this conclusion, but
I wanted to double-check with you first. This would need a backpatch
down to 10 so that we can clean up as much code as possible in
TestUpgradeXversion.pm without waiting for an extra 5 years. Please
note that I am fine with sending a patch for the buildfarm client.
We wouldn't miss new core types, because of the 2nd part of type_sanity which
tests that each core type was included in the "manytypes" table.
Thanks, I see your point now after a closer read.
There is still a pending question for contrib modules, but I think
that we need to think larger here with a better integration of
contrib/ modules in the upgrade testing process. Making that cheap
would require running the set of regression tests on the instance
to-be-upgraded first. I think that one step in this direction would
be to have unique databases for each contrib/ module, so that there is
no overlap with the objects dropped?
Having some checks with core types looks fine as a first step, so
let's do that. I have reviewed 0001 and rewrote a couple of comments.
All the comments from upthread seem to be covered with that. So I'd
like to get that applied on HEAD. We could as well be less
conservative and backpatch that down to 12 to follow on 7c15cef so we
would be more careful with 15~ already (a backpatch down to 14 would
be enough for this purpose, actually thanks to the 14->15 upgrade
path).
--
Michael
Attachments:
v7-0001-pg_upgrade-test-to-exercise-binary-compatibility.patchtext/x-diff; charset=us-asciiDownload
From c4d766f9a461dad2d51cba3cb8d7d0c523267716 Mon Sep 17 00:00:00 2001
From: Michael Paquier <michael@paquier.xyz>
Date: Wed, 17 Nov 2021 15:57:04 +0900
Subject: [PATCH v7] pg_upgrade: test to exercise binary compatibility
Creating a table with columns of many different datatypes to notice if the
binary format is accidentally changed again, as happened at:
7c15cef86 Base information_schema.sql_identifier domain on name, not varchar.
I checked that if I cherry-pick to v11, and comment out
old_11_check_for_sql_identifier_data_type_usage(), then pg_upgrade/test.sh
detects the original problem:
pg_dump: error: Error message from server: ERROR: invalid memory alloc request size 18446744073709551613
I understand the buildfarm has its own cross-version-upgrade test, which I
think would catch this on its own.
---
src/test/regress/expected/sanity_check.out | 1 +
src/test/regress/expected/type_sanity.out | 102 +++++++++++++++++++++
src/test/regress/sql/type_sanity.sql | 100 ++++++++++++++++++++
3 files changed, 203 insertions(+)
diff --git a/src/test/regress/expected/sanity_check.out b/src/test/regress/expected/sanity_check.out
index d04dc66db9..63706a28cc 100644
--- a/src/test/regress/expected/sanity_check.out
+++ b/src/test/regress/expected/sanity_check.out
@@ -185,6 +185,7 @@ sql_parts|f
sql_sizing|f
stud_emp|f
student|f
+tab_core_types|f
tableam_parted_a_heap2|f
tableam_parted_b_heap2|f
tableam_parted_c_heap2|f
diff --git a/src/test/regress/expected/type_sanity.out b/src/test/regress/expected/type_sanity.out
index f567fd378e..3ffd9d0d71 100644
--- a/src/test/regress/expected/type_sanity.out
+++ b/src/test/regress/expected/type_sanity.out
@@ -674,3 +674,105 @@ WHERE p1.rngmultitypid IS NULL OR p1.rngmultitypid = 0;
----------+------------+---------------
(0 rows)
+-- Create a table that holds all the known in-core data types and leave it
+-- around so as pg_upgrade is able to test their binary compatibility.
+CREATE TABLE tab_core_types AS SELECT
+ '(11,12)'::point,
+ '(1,1),(2,2)'::line,
+ '((11,11),(12,12))'::lseg,
+ '((11,11),(13,13))'::box,
+ '((11,12),(13,13),(14,14))'::path AS openedpath,
+ '[(11,12),(13,13),(14,14)]'::path AS closedpath,
+ '((11,12),(13,13),(14,14))'::polygon,
+ '1,1,1'::circle,
+ 'today'::date,
+ 'now'::time,
+ 'now'::timestamp,
+ 'now'::timetz,
+ 'now'::timestamptz,
+ '12 seconds'::interval,
+ '{"reason":"because"}'::json,
+ '{"when":"now"}'::jsonb,
+ '$.a[*] ? (@ > 2)'::jsonpath,
+ '127.0.0.1'::inet,
+ '127.0.0.0/8'::cidr,
+ '00:01:03:86:1c:ba'::macaddr8,
+ '00:01:03:86:1c:ba'::macaddr,
+ 2::int2, 4::int4, 8::int8,
+ 4::float4, '8'::float8, pi()::numeric,
+ 'foo'::"char",
+ 'c'::bpchar,
+ 'abc'::varchar,
+ 'name'::name,
+ 'txt'::text,
+ true::bool,
+ E'\\xDEADBEEF'::bytea,
+ B'10001'::bit,
+ B'10001'::varbit AS varbit,
+ '12.34'::money,
+ 'abc'::refcursor,
+ '1 2'::int2vector,
+ '1 2'::oidvector,
+ format('%s=UC/%s', USER, USER)::aclitem,
+ 'a fat cat sat on a mat and ate a fat rat'::tsvector,
+ 'fat & rat'::tsquery,
+ 'a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a11'::uuid,
+ '11'::xid8,
+ 'pg_class'::regclass,
+ 'regtype'::regtype type,
+ 'pg_monitor'::regrole,
+ 'pg_class'::regclass::oid,
+ '(1,1)'::tid, '2'::xid, '3'::cid,
+ '10:20:10,14,15'::txid_snapshot,
+ '10:20:10,14,15'::pg_snapshot,
+ '16/B374D848'::pg_lsn,
+ 1::information_schema.cardinal_number,
+ 'l'::information_schema.character_data,
+ 'n'::information_schema.sql_identifier,
+ 'now'::information_schema.time_stamp,
+ 'YES'::information_schema.yes_or_no,
+ 'venus'::planets,
+ 'i16'::insenum,
+ '(1,2)'::int4range, '{(1,2)}'::int4multirange,
+ '(3,4)'::int8range, '{(3,4)}'::int8multirange,
+ '(1,2)'::float8range, '{(1,2)}'::float8multirange,
+ '(3,4)'::numrange, '{(3,4)}'::nummultirange,
+ '(a,b)'::textrange, '{(a,b)}'::textmultirange,
+ '(12.34, 56.78)'::cashrange, '{(12.34, 56.78)}'::cashmultirange,
+ '(2020-01-02, 2021-02-03)'::daterange,
+ '{(2020-01-02, 2021-02-03)}'::datemultirange,
+ '(2020-01-02 03:04:05, 2021-02-03 06:07:08)'::tsrange,
+ '{(2020-01-02 03:04:05, 2021-02-03 06:07:08)}'::tsmultirange,
+ '(2020-01-02 03:04:05, 2021-02-03 06:07:08)'::tstzrange,
+ '{(2020-01-02 03:04:05, 2021-02-03 06:07:08)}'::tstzmultirange,
+ arrayrange(ARRAY[1,2], ARRAY[2,1]),
+ arraymultirange(arrayrange(ARRAY[1,2], ARRAY[2,1]));
+-- Sanity check on the previous table, checking that all core types are
+-- included in this table.
+SELECT oid, typname, typtype, typelem, typarray, typarray
+ FROM pg_type t
+ WHERE typtype NOT IN ('p', 'c') AND
+ -- reg* types cannot be pg_upgraded, so discard them.
+ oid != ALL(ARRAY['regproc', 'regprocedure', 'regoper',
+ 'regoperator', 'regconfig', 'regdictionary',
+ 'regnamespace', 'regcollation']::regtype[]) AND
+ -- Discard types that do not accept input values as these cannot be
+ -- tested easily.
+ -- Note: XML might be disabled at compile-time.
+ oid != ALL(ARRAY['gtsvector', 'pg_node_tree',
+ 'pg_ndistinct', 'pg_dependencies', 'pg_mcv_list',
+ 'pg_brin_bloom_summary',
+ 'pg_brin_minmax_multi_summary', 'xml']::regtype[]) AND
+ -- Discard arrays.
+ NOT EXISTS (SELECT 1 FROM pg_type u WHERE u.typarray = t.oid)
+ -- Exclude everything from the table created above. This checks
+ -- that no in-core types are missing in tab_core_types.
+ AND NOT EXISTS (SELECT 1
+ FROM pg_attribute a
+ WHERE a.atttypid=t.oid AND
+ a.attnum > 0 AND
+ a.attrelid='tab_core_types'::regclass);
+ oid | typname | typtype | typelem | typarray | typarray
+-----+---------+---------+---------+----------+----------
+(0 rows)
+
diff --git a/src/test/regress/sql/type_sanity.sql b/src/test/regress/sql/type_sanity.sql
index 404c3a2043..f92773b75e 100644
--- a/src/test/regress/sql/type_sanity.sql
+++ b/src/test/regress/sql/type_sanity.sql
@@ -495,3 +495,103 @@ WHERE pronargs != 2
SELECT p1.rngtypid, p1.rngsubtype, p1.rngmultitypid
FROM pg_range p1
WHERE p1.rngmultitypid IS NULL OR p1.rngmultitypid = 0;
+
+-- Create a table that holds all the known in-core data types and leave it
+-- around so as pg_upgrade is able to test their binary compatibility.
+CREATE TABLE tab_core_types AS SELECT
+ '(11,12)'::point,
+ '(1,1),(2,2)'::line,
+ '((11,11),(12,12))'::lseg,
+ '((11,11),(13,13))'::box,
+ '((11,12),(13,13),(14,14))'::path AS openedpath,
+ '[(11,12),(13,13),(14,14)]'::path AS closedpath,
+ '((11,12),(13,13),(14,14))'::polygon,
+ '1,1,1'::circle,
+ 'today'::date,
+ 'now'::time,
+ 'now'::timestamp,
+ 'now'::timetz,
+ 'now'::timestamptz,
+ '12 seconds'::interval,
+ '{"reason":"because"}'::json,
+ '{"when":"now"}'::jsonb,
+ '$.a[*] ? (@ > 2)'::jsonpath,
+ '127.0.0.1'::inet,
+ '127.0.0.0/8'::cidr,
+ '00:01:03:86:1c:ba'::macaddr8,
+ '00:01:03:86:1c:ba'::macaddr,
+ 2::int2, 4::int4, 8::int8,
+ 4::float4, '8'::float8, pi()::numeric,
+ 'foo'::"char",
+ 'c'::bpchar,
+ 'abc'::varchar,
+ 'name'::name,
+ 'txt'::text,
+ true::bool,
+ E'\\xDEADBEEF'::bytea,
+ B'10001'::bit,
+ B'10001'::varbit AS varbit,
+ '12.34'::money,
+ 'abc'::refcursor,
+ '1 2'::int2vector,
+ '1 2'::oidvector,
+ format('%s=UC/%s', USER, USER)::aclitem,
+ 'a fat cat sat on a mat and ate a fat rat'::tsvector,
+ 'fat & rat'::tsquery,
+ 'a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a11'::uuid,
+ '11'::xid8,
+ 'pg_class'::regclass,
+ 'regtype'::regtype type,
+ 'pg_monitor'::regrole,
+ 'pg_class'::regclass::oid,
+ '(1,1)'::tid, '2'::xid, '3'::cid,
+ '10:20:10,14,15'::txid_snapshot,
+ '10:20:10,14,15'::pg_snapshot,
+ '16/B374D848'::pg_lsn,
+ 1::information_schema.cardinal_number,
+ 'l'::information_schema.character_data,
+ 'n'::information_schema.sql_identifier,
+ 'now'::information_schema.time_stamp,
+ 'YES'::information_schema.yes_or_no,
+ 'venus'::planets,
+ 'i16'::insenum,
+ '(1,2)'::int4range, '{(1,2)}'::int4multirange,
+ '(3,4)'::int8range, '{(3,4)}'::int8multirange,
+ '(1,2)'::float8range, '{(1,2)}'::float8multirange,
+ '(3,4)'::numrange, '{(3,4)}'::nummultirange,
+ '(a,b)'::textrange, '{(a,b)}'::textmultirange,
+ '(12.34, 56.78)'::cashrange, '{(12.34, 56.78)}'::cashmultirange,
+ '(2020-01-02, 2021-02-03)'::daterange,
+ '{(2020-01-02, 2021-02-03)}'::datemultirange,
+ '(2020-01-02 03:04:05, 2021-02-03 06:07:08)'::tsrange,
+ '{(2020-01-02 03:04:05, 2021-02-03 06:07:08)}'::tsmultirange,
+ '(2020-01-02 03:04:05, 2021-02-03 06:07:08)'::tstzrange,
+ '{(2020-01-02 03:04:05, 2021-02-03 06:07:08)}'::tstzmultirange,
+ arrayrange(ARRAY[1,2], ARRAY[2,1]),
+ arraymultirange(arrayrange(ARRAY[1,2], ARRAY[2,1]));
+
+-- Sanity check on the previous table, checking that all core types are
+-- included in this table.
+SELECT oid, typname, typtype, typelem, typarray, typarray
+ FROM pg_type t
+ WHERE typtype NOT IN ('p', 'c') AND
+ -- reg* types cannot be pg_upgraded, so discard them.
+ oid != ALL(ARRAY['regproc', 'regprocedure', 'regoper',
+ 'regoperator', 'regconfig', 'regdictionary',
+ 'regnamespace', 'regcollation']::regtype[]) AND
+ -- Discard types that do not accept input values as these cannot be
+ -- tested easily.
+ -- Note: XML might be disabled at compile-time.
+ oid != ALL(ARRAY['gtsvector', 'pg_node_tree',
+ 'pg_ndistinct', 'pg_dependencies', 'pg_mcv_list',
+ 'pg_brin_bloom_summary',
+ 'pg_brin_minmax_multi_summary', 'xml']::regtype[]) AND
+ -- Discard arrays.
+ NOT EXISTS (SELECT 1 FROM pg_type u WHERE u.typarray = t.oid)
+ -- Exclude everything from the table created above. This checks
+ -- that no in-core types are missing in tab_core_types.
+ AND NOT EXISTS (SELECT 1
+ FROM pg_attribute a
+ WHERE a.atttypid=t.oid AND
+ a.attnum > 0 AND
+ a.attrelid='tab_core_types'::regclass);
--
2.33.1
On 11/17/21 02:01, Michael Paquier wrote:
The oldest version tested by the buildfarm is 9.2, so we could ignore
this part I guess?
Andrew, what do you think about this part? Based on my read of this
thread, there is an agreement that this approach makes the buildfarm
code more manageable, so that committers would not need to patch the
buildfarm code if their tests fail. I agree with this conclusion, but
I wanted to double-check with you first. This would need a backpatch
down to 10 so that we can clean up as much code as possible in
TestUpgradeXversion.pm without waiting for an extra 5 years. Please
note that I am fine with sending a patch for the buildfarm client.
In general I'm in agreement with the direction here. If we can have a
script that applies to back branches to make them suitable for upgrade
testing instead of embedding this in the buildfarm client, so much the
better.
cheers
andrew
--
Andrew Dunstan
EDB: https://www.enterprisedb.com
On Wed, Nov 17, 2021 at 10:07:17AM -0500, Andrew Dunstan wrote:
In general I'm in agreement with the direction here. If we can have a
script that applies to back branches to make them suitable for upgrade
testing instead of embedding this in the buildfarm client, so much the
better.
Okay. I have worked on 0001 to add the table to check after the
binary compatibilities and applied it. What remains on this thread is
0002 to move all the SQL queries into a psql-able file with the set of
\if clauses to control which query is run depending on the backend
version. Justin, could you send a rebased version of that with all
the changes from the buildfarm client included?
--
Michael
On Wed, Nov 17, 2021 at 04:01:19PM +0900, Michael Paquier wrote:
On Sun, Nov 07, 2021 at 01:22:00PM -0600, Justin Pryzby wrote:
That may be good enough for test.sh, but if the kludges were moved to a .sql
script which was also run by the buildfarm (instead of its hardcoded kludges), then
it might be necessary to handle the additional stuff my patch did, like:
+ DROP TRANSFORM FOR integer LANGUAGE sql CASCADE;
+ DROP FUNCTION boxarea(box);
+ DROP FUNCTION funny_dup17();
These apply for an old version <= v10.
+ DROP TABLE abstime_tbl;
+ DROP TABLE reltime_tbl;
+ DROP TABLE tinterval_tbl;
old version <= 9.3.
+ DROP AGGREGATE first_el_agg_any(anyelement);"
Not sure about this one.
See 97f73a978fc1aca59c6ad765548ce0096d95a923
These are on 9.4. It is worth noting that TestUpgradeXversion.pm
recreates those objects. I'd agree to close the gap completely rather
than just moving what test.sh does to wipe out a maximum client code
for the buildfarm.
Or, maybe it's guaranteed that the animals all run latest version of old
branches, in which case I think some of the BF's existing logic could be
dropped, which would help to reconcile these two scripts:
my $missing_funcs = q{drop function if exists public.boxarea(box);
drop function if exists public.funny_dup17();
..
$prstmt = join(';',
'drop operator @#@ (NONE, bigint)',
..
'drop aggregate if exists public.array_cat_accum(anyarray)',
I'm not sure if everything the buildfarm does is needed anymore, or if any of
it could be removed now, rather than being implemented in test.sh.
boxarea, funny_dup - see also db3af9feb19f39827e916145f88fa5eca3130cb2
https://github.com/PGBuildFarm/client-code/commit/9ca42ac1783a8cf99c73b4f7c52bd05a6024669d
array_larger_accum/array_cat_accum - see also 97f73a978fc1aca59c6ad765548ce0096d95a923
https://github.com/PGBuildFarm/client-code/commit/a55c89869f30db894ab823df472e739cee2e8c91
@#@ 76f412ab310554acb970a0b73c8d1f37f35548c6 ??
https://github.com/PGBuildFarm/client-code/commit/b3fdb743d89dc91fcea47bd9651776c503f774ff
https://github.com/PGBuildFarm/client-code/commit/b44e9390e2d8d904ff8cabd906a2d4b5c8bd300a
https://github.com/PGBuildFarm/client-code/commit/3844503c8fde134f7cc29b3fb147d590b6d2fcc1
abstime:
https://github.com/PGBuildFarm/client-code/commit/f027d991d197036028ffa9070f4c9193076ed5ed
putenv
https://github.com/PGBuildFarm/client-code/commit/fa86d0b1bc7a8d7b9f15b1da8b8e43f4d3a08e2b
Attachments:
v7-0001-Move-pg_upgrade-kludges-to-sql-script.patchtext/x-diff; charset=us-asciiDownload
From 2e420708f28574e73a86f8eba185d89a52d46509 Mon Sep 17 00:00:00 2001
From: Justin Pryzby <pryzbyj@telsasoft.com>
Date: Sat, 6 Mar 2021 18:35:26 -0600
Subject: [PATCH v7 1/2] Move pg_upgrade kludges to sql script
NOTE, "IF EXISTS" isn't necessary in fa66b6dee
---
src/bin/pg_upgrade/test-upgrade.sql | 52 +++++++++++++++++++++++++++++
src/bin/pg_upgrade/test.sh | 48 +-------------------------
2 files changed, 53 insertions(+), 47 deletions(-)
create mode 100644 src/bin/pg_upgrade/test-upgrade.sql
diff --git a/src/bin/pg_upgrade/test-upgrade.sql b/src/bin/pg_upgrade/test-upgrade.sql
new file mode 100644
index 0000000000..5d74232c2b
--- /dev/null
+++ b/src/bin/pg_upgrade/test-upgrade.sql
@@ -0,0 +1,52 @@
+-- This file has a bunch of kludges needed for testing upgrades across major versions
+-- It supports testing the most recent version of an old release (not any arbitrary minor version).
+
+SELECT
+ ver <= 804 AS oldpgversion_le84,
+ ver < 1000 AS oldpgversion_lt10,
+ ver < 1200 AS oldpgversion_lt12,
+ ver < 1400 AS oldpgversion_lt14
+ FROM (SELECT current_setting('server_version_num')::int/100 AS ver) AS v;
+\gset
+
+\if :oldpgversion_le84
+DROP FUNCTION public.myfunc(integer);
+\endif
+
+\if :oldpgversion_lt10
+-- last in 9.6 -- commit 5ded4bd21
+DROP FUNCTION public.oldstyle_length(integer, text);
+\endif
+
+\if :oldpgversion_lt14
+-- last in v13 commit 7ca37fb04
+DROP FUNCTION IF EXISTS public.putenv(text);
+
+-- last in v13 commit 76f412ab3
+-- public.!=- This one is only needed for v11+ ??
+-- Note, until v10, operators could only be dropped one at a time
+DROP OPERATOR public.#@# (pg_catalog.int8, NONE);
+DROP OPERATOR public.#%# (pg_catalog.int8, NONE);
+DROP OPERATOR public.!=- (pg_catalog.int8, NONE);
+DROP OPERATOR public.#@%# (pg_catalog.int8, NONE);
+\endif
+
+\if :oldpgversion_lt12
+-- WITH OIDS is not supported anymore in v12, so remove support
+-- for any relations marked as such.
+DO $stmt$
+ DECLARE
+ rec text;
+ BEGIN
+ FOR rec in
+ SELECT oid::regclass::text
+ FROM pg_class
+ WHERE relname !~ '^pg_'
+ AND relhasoids
+ AND relkind in ('r','m')
+ ORDER BY 1
+ LOOP
+ execute 'ALTER TABLE ' || rec || ' SET WITHOUT OIDS';
+ END LOOP;
+ END; $stmt$;
+\endif
diff --git a/src/bin/pg_upgrade/test.sh b/src/bin/pg_upgrade/test.sh
index 8593488907..46a1ebb4ab 100644
--- a/src/bin/pg_upgrade/test.sh
+++ b/src/bin/pg_upgrade/test.sh
@@ -181,53 +181,7 @@ if "$MAKE" -C "$oldsrc" installcheck-parallel; then
# Before dumping, tweak the database of the old instance depending
# on its version.
if [ "$newsrc" != "$oldsrc" ]; then
- fix_sql=""
- # Get rid of objects not feasible in later versions
- case $oldpgversion in
- 804??)
- fix_sql="DROP FUNCTION public.myfunc(integer);"
- ;;
- esac
-
- # Last appeared in v9.6
- if [ $oldpgversion -lt 100000 ]; then
- fix_sql="$fix_sql
- DROP FUNCTION IF EXISTS
- public.oldstyle_length(integer, text);"
- fi
- # Last appeared in v13
- if [ $oldpgversion -lt 140000 ]; then
- fix_sql="$fix_sql
- DROP FUNCTION IF EXISTS
- public.putenv(text); -- last in v13
- DROP OPERATOR IF EXISTS -- last in v13
- public.#@# (pg_catalog.int8, NONE),
- public.#%# (pg_catalog.int8, NONE),
- public.!=- (pg_catalog.int8, NONE),
- public.#@%# (pg_catalog.int8, NONE);"
- fi
- psql -X -d regression -c "$fix_sql;" || psql_fix_sql_status=$?
-
- # WITH OIDS is not supported anymore in v12, so remove support
- # for any relations marked as such.
- if [ $oldpgversion -lt 120000 ]; then
- fix_sql="DO \$stmt\$
- DECLARE
- rec text;
- BEGIN
- FOR rec in
- SELECT oid::regclass::text
- FROM pg_class
- WHERE relname !~ '^pg_'
- AND relhasoids
- AND relkind in ('r','m')
- ORDER BY 1
- LOOP
- execute 'ALTER TABLE ' || rec || ' SET WITHOUT OIDS';
- END LOOP;
- END; \$stmt\$;"
- psql -X -d regression -c "$fix_sql;" || psql_fix_sql_status=$?
- fi
+ psql -X -d regression -f "test-upgrade.sql" || psql_fix_sql_status=$?
# Handling of --extra-float-digits gets messy after v12.
# Note that this changes the dumps from the old and new
--
2.17.0
v7-0002-wip-support-pg_upgrade-from-older-versions.patchtext/x-diff; charset=us-asciiDownload
From 79242aed5b38f7498a089c6cd972ca24f8e357d1 Mon Sep 17 00:00:00 2001
From: Justin Pryzby <pryzbyj@telsasoft.com>
Date: Wed, 17 Nov 2021 11:27:20 -0600
Subject: [PATCH v7 2/2] wip: support pg_upgrade from older versions
---
src/bin/pg_upgrade/test-upgrade.sql | 36 +++++++++++++++++++++++++++++
1 file changed, 36 insertions(+)
diff --git a/src/bin/pg_upgrade/test-upgrade.sql b/src/bin/pg_upgrade/test-upgrade.sql
index 5d74232c2b..1d0a840ec4 100644
--- a/src/bin/pg_upgrade/test-upgrade.sql
+++ b/src/bin/pg_upgrade/test-upgrade.sql
@@ -3,6 +3,9 @@
SELECT
ver <= 804 AS oldpgversion_le84,
+ ver >= 905 AND ver <= 1300 AS oldpgversion_95_13,
+ ver >= 906 AND ver <= 1300 AS oldpgversion_96_13,
+ ver >= 906 AND ver <= 1000 AS oldpgversion_96_10,
ver < 1000 AS oldpgversion_lt10,
ver < 1200 AS oldpgversion_lt12,
ver < 1400 AS oldpgversion_lt14
@@ -31,9 +34,42 @@ DROP OPERATOR public.!=- (pg_catalog.int8, NONE);
DROP OPERATOR public.#@%# (pg_catalog.int8, NONE);
\endif
+\if :oldpgversion_ge10
+-- commit 068503c76511cdb0080bab689662a20e86b9c845
+DROP TRANSFORM FOR integer LANGUAGE sql CASCADE;
+\endif
+
+\if :oldpgversion_96_10
+-- commit db3af9feb19f39827e916145f88fa5eca3130cb2
+DROP FUNCTION boxarea(box);
+DROP FUNCTION funny_dup17();
+
+-- commit cda6a8d01d391eab45c4b3e0043a1b2b31072f5f
+DROP TABLE abstime_tbl;
+DROP TABLE reltime_tbl;
+DROP TABLE tinterval_tbl;
+\endif
+
+\if :oldpgversion_96_13
+-- Various things removed for v14
+-- commit 9e38c2bb5 and 97f73a978
+DROP AGGREGATE first_el_agg_any(anyelement);
+\endif
+
+\if :oldpgversion_95_13
+-- commit 9e38c2bb5 and 97f73a978
+-- DROP AGGREGATE array_larger_accum(anyarray);
+DROP AGGREGATE array_cat_accum(anyarray);
+
+-- commit 76f412ab3
+-- DROP OPERATOR @#@(bigint,NONE);
+DROP OPERATOR @#@(NONE,bigint);
+\endif
+
\if :oldpgversion_lt12
-- WITH OIDS is not supported anymore in v12, so remove support
-- for any relations marked as such.
+-- commit 578b22971: OIDS removed in v12
DO $stmt$
DECLARE
rec text;
--
2.17.0
Michael Paquier <michael@paquier.xyz> writes:
Okay. I have worked on 0001 to add the table to check after the
binary compatibilities and applied it.
Something funny about that on prion:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2021-11-18%2001%3A55%3A38
@@ -747,6 +747,8 @@
'{(2020-01-02 03:04:05, 2021-02-03 06:07:08)}'::tstzmultirange,
arrayrange(ARRAY[1,2], ARRAY[2,1]),
arraymultirange(arrayrange(ARRAY[1,2], ARRAY[2,1]));
+ERROR: unrecognized key word: "ec2"
+HINT: ACL key word must be "group" or "user".
-- Sanity check on the previous table, checking that all core types are
-- included in this table.
SELECT oid, typname, typtype, typelem, typarray, typarray
Not sure what's going on there.
regards, tom lane
On Wed, Nov 17, 2021 at 11:57:51PM -0500, Tom Lane wrote:
Something funny about that on prion:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prion&dt=2021-11-18%2001%3A55%3A38
Not sure what's going on there.
Yes, that was just some missing quoting in the aclitem of this new
table. prion uses a specific user name, "ec2-user", that caused the
failure.
--
Michael
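The prion failure comes from aclitem's input syntax: a role name that is not a plain lower-case identifier (such as "ec2-user", with its hyphen) must be double-quoted, following the same rules as SQL identifiers. A hedged sketch of that quoting rule in Python (the `quote_role`/`make_aclitem` helpers are illustrative, not the committed fix):

```python
import re

def quote_role(name: str) -> str:
    """Double-quote a role name for aclitem input unless it is a plain
    lower-case identifier; embedded double quotes are doubled, as in SQL."""
    if re.fullmatch(r"[a-z_][a-z0-9_]*", name):
        return name
    return '"' + name.replace('"', '""') + '"'

def make_aclitem(role: str, privs: str = "UC") -> str:
    # Mirrors format('%s=UC/%s', USER, USER) from the test, with quoting added.
    q = quote_role(role)
    return f"{q}={privs}/{q}"

print(make_aclitem("postgres"))   # postgres=UC/postgres
print(make_aclitem("ec2-user"))   # "ec2-user"=UC/"ec2-user"
```

Without the quoting, aclitem's parser splits "ec2-user" at the hyphen and then complains about the unrecognized key word "ec2", which matches the buildfarm error above.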
On Wed, Nov 17, 2021 at 10:47:28PM -0600, Justin Pryzby wrote:
I'm not sure if everything the buildfarm does is needed anymore, or if any of
it could be removed now, rather than being implemented in test.sh.
+-- This file has a bunch of kludges needed for testing upgrades across major versions
+-- It supports testing the most recent version of an old release (not any arbitrary minor version).
This could be better-worded. Here is an idea:
--
-- SQL queries for major upgrade tests
--
-- This file includes a set of SQL queries to make a cluster to-be-upgraded
-- compatible with the version this file is on. This requires psql,
-- as per-version queries are controlled with a set of \if clauses.
+\if :oldpgversion_le84
+DROP FUNCTION public.myfunc(integer);
+\endif
We could retire this part for <= 8.4. The oldest version tested by
the buildfarm is 9.2.
+ psql -X -d regression -f "test-upgrade.sql" || psql_fix_sql_status=$?
Shouldn't we use an absolute path here? I was testing a VPATH build
and that was not working properly.
+-- commit 9e38c2bb5 and 97f73a978
+-- DROP AGGREGATE array_larger_accum(anyarray);
+DROP AGGREGATE array_cat_accum(anyarray);
+
+-- commit 76f412ab3
+-- DROP OPERATOR @#@(bigint,NONE);
+DROP OPERATOR @#@(NONE,bigint);
+\endif
The buildfarm does "CREATE OPERATOR @#@" and "CREATE AGGREGATE
array_larger_accum" when dealing with an old version between 9.5 and
13. Shouldn't we do the same and create those objects rather than a
plain DROP? What you are doing is not wrong, and it should allow
upgrades to work, but that's a bit inconsistent with the buildfarm in
terms of coverage.
+ ver >= 905 AND ver <= 1300 AS oldpgversion_95_13,
+ ver >= 906 AND ver <= 1300 AS oldpgversion_96_13,
+ ver >= 906 AND ver <= 1000 AS oldpgversion_96_10,
So here, we have the choice between conditions that play with version
ranges or we could make those checks simpler but compensate with a set
of IF EXISTS queries. I think that your choice is right. The
buildfarm mixes both styles to compensate with the cases where the
objects are created after a drop.
The list of objects and the version ranges look correct to me.
--
Michael
On Thu, Nov 18, 2021 at 03:58:18PM +0900, Michael Paquier wrote:
+ ver >= 905 AND ver <= 1300 AS oldpgversion_95_13,
+ ver >= 906 AND ver <= 1300 AS oldpgversion_96_13,
+ ver >= 906 AND ver <= 1000 AS oldpgversion_96_10,
So here, we have the choice between conditions that play with version
ranges, or we could make those checks simpler but compensate with a set
of IF EXISTS queries. I think that your choice is right. The
buildfarm mixes both styles to compensate for the cases where the
objects are created after a drop.
So, I have come back to this part of the patch set, which moves the SQL
queries doing the pre-upgrade cleanups on the old version we upgrade
from, and decided to go with what looks like the simplest approach:
relying on IF EXISTS clauses for the object types that may not exist
in some cases.
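As an editorial illustration of the IF EXISTS style described above, a hedged sketch (not the committed upgrade_adapt.sql):

```sql
-- Sketch only: drop version-dependent objects with IF EXISTS under a
-- single coarse version gate, instead of tight version-range \if tests.
\if :oldpgversion_lt14
-- Objects last present in v13; IF EXISTS makes the same statements
-- safe on branches where the regression tests never created them.
DROP AGGREGATE IF EXISTS public.first_el_agg_any(anyelement);
DROP AGGREGATE IF EXISTS public.array_cat_accum(anyarray);
DROP OPERATOR IF EXISTS public.@#@ (NONE, pg_catalog.int8);
\endif
```

The trade-off is fewer psql variables to maintain at the cost of silently skipping objects, which is acceptable here since the goal is only to make the old cluster upgradable.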
While checking the whole thing, I have noticed that some of the
operations were not really necessary. The result is rather clean now,
with a linear organization of the version logic, so that it is a
no-brainer to get that done in back-branches per the
backward-compatibility argument.
I'll get that done down to 10 to maximize its influence, then I'll
move on with the buildfarm code and send a patch to plug this and
reduce the dependencies between core and the buildfarm code.
--
Michael
Attachments:
0001-Move-SQLs-for-cleanups-before-cross-version-upgrades.patchtext/x-diff; charset=us-asciiDownload
From 9bfe54d8867c9c05a36976f01ed65e5b8da442f7 Mon Sep 17 00:00:00 2001
From: Michael Paquier <michael@paquier.xyz>
Date: Wed, 1 Dec 2021 16:08:01 +0900
Subject: [PATCH] Move SQLs for cleanups before cross-version upgrades into a
new file
The plan is to make the buildfarm code re-use this code, and test.sh
held a duplicated logic for this work. Separating those SQLs into a new
file with a set of \if clauses to do version checks with the old version
upgrading will allow the buildfarm to reuse that. An extra
simplification is that committers will be able to control the objects
cleaned up without any need to tweak the buildfarm code, at least for the
main regression test suite.
Backpatch down to 10, to maximize its effects.
---
src/bin/pg_upgrade/test.sh | 52 ++--------------
src/bin/pg_upgrade/upgrade_adapt.sql | 92 ++++++++++++++++++++++++++++
2 files changed, 97 insertions(+), 47 deletions(-)
create mode 100644 src/bin/pg_upgrade/upgrade_adapt.sql
diff --git a/src/bin/pg_upgrade/test.sh b/src/bin/pg_upgrade/test.sh
index 8593488907..54c02bc65b 100644
--- a/src/bin/pg_upgrade/test.sh
+++ b/src/bin/pg_upgrade/test.sh
@@ -181,53 +181,11 @@ if "$MAKE" -C "$oldsrc" installcheck-parallel; then
# Before dumping, tweak the database of the old instance depending
# on its version.
if [ "$newsrc" != "$oldsrc" ]; then
- fix_sql=""
- # Get rid of objects not feasible in later versions
- case $oldpgversion in
- 804??)
- fix_sql="DROP FUNCTION public.myfunc(integer);"
- ;;
- esac
-
- # Last appeared in v9.6
- if [ $oldpgversion -lt 100000 ]; then
- fix_sql="$fix_sql
- DROP FUNCTION IF EXISTS
- public.oldstyle_length(integer, text);"
- fi
- # Last appeared in v13
- if [ $oldpgversion -lt 140000 ]; then
- fix_sql="$fix_sql
- DROP FUNCTION IF EXISTS
- public.putenv(text); -- last in v13
- DROP OPERATOR IF EXISTS -- last in v13
- public.#@# (pg_catalog.int8, NONE),
- public.#%# (pg_catalog.int8, NONE),
- public.!=- (pg_catalog.int8, NONE),
- public.#@%# (pg_catalog.int8, NONE);"
- fi
- psql -X -d regression -c "$fix_sql;" || psql_fix_sql_status=$?
-
- # WITH OIDS is not supported anymore in v12, so remove support
- # for any relations marked as such.
- if [ $oldpgversion -lt 120000 ]; then
- fix_sql="DO \$stmt\$
- DECLARE
- rec text;
- BEGIN
- FOR rec in
- SELECT oid::regclass::text
- FROM pg_class
- WHERE relname !~ '^pg_'
- AND relhasoids
- AND relkind in ('r','m')
- ORDER BY 1
- LOOP
- execute 'ALTER TABLE ' || rec || ' SET WITHOUT OIDS';
- END LOOP;
- END; \$stmt\$;"
- psql -X -d regression -c "$fix_sql;" || psql_fix_sql_status=$?
- fi
+ # This SQL script has its own idea of the cleanup that needs to be
+ # done and embeds version checks. Note that this uses the script
+ # stored on the new branch.
+ psql -X -d regression -f "$newsrc/src/bin/pg_upgrade/upgrade_adapt.sql" \
+ || psql_fix_sql_status=$?
# Handling of --extra-float-digits gets messy after v12.
# Note that this changes the dumps from the old and new
diff --git a/src/bin/pg_upgrade/upgrade_adapt.sql b/src/bin/pg_upgrade/upgrade_adapt.sql
new file mode 100644
index 0000000000..175b2ebe2e
--- /dev/null
+++ b/src/bin/pg_upgrade/upgrade_adapt.sql
@@ -0,0 +1,92 @@
+--
+-- SQL queries for upgrade tests across different major versions.
+--
+-- This file includes a set of SQL queries to make a cluster to-be-upgraded
+-- compatible with the version this file is based on. Note that this
+-- requires psql, as per-version queries are controlled with a set of \if
+-- clauses.
+
+-- This script is backward-compatible, so it is able to work with any version
+-- newer than 9.2 we are upgrading from, up to the branch this script is stored
+-- on (even if this would not run if running pg_upgrade with the same version
+-- for the origin and the target).
+
+-- \if accepts a simple boolean value, so all the version checks are
+-- done based on this assumption.
+SELECT
+ ver <= 902 AS oldpgversion_le92,
+ ver <= 904 AS oldpgversion_le94,
+ ver <= 906 AS oldpgversion_le96,
+ ver <= 1000 AS oldpgversion_le10,
+ ver <= 1100 AS oldpgversion_le11,
+ ver <= 1300 AS oldpgversion_le13
+ FROM (SELECT current_setting('server_version_num')::int / 100 AS ver) AS v;
+\gset
+
+-- Objects last appearing in 9.2.
+\if :oldpgversion_le92
+-- Note that those tables are removed from the regression tests in 9.3
+-- and newer versions.
+DROP TABLE abstime_tbl;
+DROP TABLE reltime_tbl;
+DROP TABLE tinterval_tbl;
+\endif
+
+-- Objects last appearing in 9.4.
+\if :oldpgversion_le94
+-- This aggregate has been fixed in 9.5 and later versions, so drop
+-- and re-create it.
+DROP AGGREGATE array_cat_accum(anyarray);
+CREATE AGGREGATE array_larger_accum (anyarray) (
+ sfunc = array_larger,
+ stype = anyarray,
+ initcond = $${}$$);
+-- This operator has been fixed in 9.5 and later versions, so drop and
+-- re-create it.
+DROP OPERATOR @#@ (NONE, bigint);
+CREATE OPERATOR @#@ (PROCEDURE = factorial,
+ RIGHTARG = bigint);
+\endif
+
+-- Objects last appearing in 9.6.
+\if :oldpgversion_le96
+DROP FUNCTION public.oldstyle_length(integer, text);
+\endif
+
+-- Objects last appearing in 10.
+\if :oldpgversion_le10
+DROP FUNCTION IF EXISTS boxarea(box);
+DROP FUNCTION IF EXISTS funny_dup17();
+\endif
+
+-- Objects last appearing in 11.
+\if :oldpgversion_le11
+-- WITH OIDS is supported until v11, so remove its support for any
+-- relations marked as such.
+DO $stmt$
+ DECLARE
+ rec text;
+ BEGIN
+ FOR rec in
+ SELECT oid::regclass::text
+ FROM pg_class
+ WHERE relname !~ '^pg_'
+ AND relhasoids
+ AND relkind in ('r','m')
+ ORDER BY 1
+ LOOP
+ execute 'ALTER TABLE ' || rec || ' SET WITHOUT OIDS';
+ END LOOP;
+ END; $stmt$;
+\endif
+
+-- Objects last appearing in 13.
+\if :oldpgversion_le13
+DROP FUNCTION IF EXISTS public.putenv(text);
+-- Until v10, operators could only be dropped one at a time, so be careful
+-- to stick with one command for each drop here.
+DROP OPERATOR public.#@# (pg_catalog.int8, NONE);
+DROP OPERATOR public.#%# (pg_catalog.int8, NONE);
+DROP OPERATOR public.!=- (pg_catalog.int8, NONE);
+DROP OPERATOR public.#@%# (pg_catalog.int8, NONE);
+\endif
--
2.34.0
On Wed, Dec 01, 2021 at 04:19:44PM +0900, Michael Paquier wrote:
> I'll get that done down to 10 to maximize its influence, then I'll
> move on with the buildfarm code and send a patch to plug this and
> reduce the dependencies between core and the buildfarm code.
Okay, I have checked this one this morning, and applied the split down
to 10, so that we have a way to fix objects from the main regression
test suite. The buildfarm client gets a bit cleaned up after that (I
have a patch for that, but I am not 100% sure that it is right).
Still, the global picture is larger than that because there is still
nothing done for contrib/ modules included in cross-version checks of
pg_upgrade by the buildfarm. The core code tests don't do this much,
but if we were to do the same things as the buildfarm, then we would
need to run installcheck-world (roughly) on a deployed instance, then
pg_upgrade it. That's not going to be cheap, for sure.
One thing that we could do is to use unique names for the databases of
the contrib/ modules when running an installcheck, so that these are
preserved for upgrades (the buildfarm client does that). This has the
effect of increasing the number of databases on an installcheck'ed
instance, so it had better be optional, at least.
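A sketch of what that could look like, assuming the USE_MODULE_DB make
switch that already exists in PostgreSQL's build system: with it, each
contrib module runs its tests in its own contrib_regression_<module>
database instead of reusing a single shared contrib_regression, so the
databases survive side by side for a later cross-version pg_upgrade.

```shell
# Hedged sketch: run the contrib test suites against a deployed
# instance, giving each module its own database name so that none of
# them clobbers another and all remain available for pg_upgrade.
make -C contrib installcheck USE_MODULE_DB=1
```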
--
Michael