Semi-unable to add new records to table--primary key needed?
I have a table that seems to act for some purposes as if I can't add new records to it. I would like to understand why that is and fix it.
The initial problem was that an MS-Access application using an ODBC driver (64-bit driver and database, PostgreSQL 12.0) failed at
DoCmd.GoToRecord , , acNewRec
which basically says to create a new record in the table. The error was something like "unable to create record". Unfortunately DoCmd is a black box; I can only speculate about what it is doing.
If I open the table in Access's default grid view, the controls to add records are likewise greyed out.
And if I open the table in pgAdmin, it likewise will not let me create a new record. So the problem is not Access-specific.
With other, similar tables, I can add new records.
And even with tblaliquot, I can add new records with SQL; that's how I populated the table. Perhaps the problem is that I specified values for fields that are serial, and the counter still seems to be at 1.
The table also has lots of records relative to most other ones. It has about 72,000 records.
The only obvious difference between the tables is that the others had primary keys, and this one doesn't. Could that explain what was going on?
I had to abandon plans to roll out a conversion to Postgres because of this problem, and so I'd like to solve it so we can make the switch relatively soon.
Here's the SQL on the table from pgAdmin:
CREATE TABLE public.tblaliquot
(
aliquotid integer NOT NULL DEFAULT nextval('tblaliquot_aliquotid_seq'::regclass),
preparationid bigint,
datealiquotted date,
rnaaliquottype bigint,
sequencer character varying(4) COLLATE pg_catalog."default",
aliquotlabel character varying(40) COLLATE pg_catalog."default",
aliquotbarcode character varying(255) COLLATE pg_catalog."default",
rnaaliquotconcentration double precision,
originalvolume double precision,
numberdefrosts integer,
storagetype bigint,
locationfreezer bigint,
locationrow character varying(10) COLLATE pg_catalog."default",
locationrack character varying(10) COLLATE pg_catalog."default",
locationbox character varying(10) COLLATE pg_catalog."default",
locationplate character varying(255) COLLATE pg_catalog."default",
locationspace character varying(10) COLLATE pg_catalog."default",
locationother character varying(60) COLLATE pg_catalog."default",
aliquottedby bigint,
comments text COLLATE pg_catalog."default",
creationdate date,
createdby bigint
)
The original code that created the table used AliquotID serial for the first field.
The sequence mentioned in the first field of the definition shows Current Value of 1 in the properties tab in pgAdmin, despite the large number of records. But the sequences associated with other tables also are very low, like 3 or 4. 3 or 4 might be the number of records added after the initial creation.
This worked from psql:
INSERT INTO tblAliquot VALUES (55338, 6772, '2012-10-05 00:00:00', 6, E'A', NULL, NULL, 24.3,
33, 0, 1, NULL, NULL, E'1', E'A', NULL,
E'A1', NULL, 23, NULL, '2012-10-18 00:00:00', 55 );
On 12/20/19 8:19 PM, Boylan, Ross wrote:
I have a table that seems to act for some purposes as if I can't add new records to it. I would like to understand why that is and fix it.
The initial problem was that an MS-Access application using an ODBC driver (driver and database 64 bit PG 12.0) failed at
DoCmd.GoToRecord , , acNewRec
which is basically saying to create a new record in the table. The error was something like unable to create record. Unfortunately DoCmd is a black box; I can only speculate what it is doing.
If I open the table in Access's default grid view, the controls to add records are likewise greyed out.
And if I open the table in pgAdmin, it likewise will not let me create a new record. So the problem is not Access-specific.
Might it be a privilege problem?
With other, similar tables, I can add new records.
And even with tblaliquot, I can add new records with sql; that's how I populated the table. Perhaps the problem is that I specified values for fields that are serial, and the counter still seems to be at 1.
The table also has lots of records relative to most other ones. It has about 72,000 records.
The only obvious difference between the tables is that the others had primary keys, and this one doesn't. Could that explain what was going on?
The lack of PK should make insertions *more* forgiving. (It really should
have a PK, but that's irrelevant to this problem.)
--
Angular momentum makes the world go 'round.
"Boylan, Ross" <Ross.Boylan@ucsf.edu> writes:
I have a table that seems to act for some purposes as if I can't add new records to it. I would like to understand why that is and fix it.
The initial problem was that an MS-Access application using an ODBC driver (driver and database 64 bit PG 12.0) failed at
DoCmd.GoToRecord , , acNewRec
which is basically saying to create a new record in the table. The error was something like unable to create record. Unfortunately DoCmd is a black box; I can only speculate what it is doing.
Aren't black-box apps fun? You might get some insight from looking into
the postmaster log to see what error the database thinks it's throwing.
Perhaps also turn on "log_statement = all" to record full details of
what SQL commands the app is sending.
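For example (assuming you have superuser access; this setting takes effect on a config reload, no restart needed):

```sql
-- Log every SQL statement the server receives, then reload the config.
ALTER SYSTEM SET log_statement = 'all';
SELECT pg_reload_conf();
```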
Also, as I recall, the pgsql-odbc driver has some ability to do client-side
query tracing. I'm not familiar with the details of that though.
regards, tom lane
On 12/20/19 6:28 PM, Ron wrote:
The lack of PK should make insertions *more* forgiving. (It really
should have a PK, but that's irrelevant to this problem.)
Actually it is the problem:
https://www.pgadmin.org/docs/pgadmin4/4.16/editgrid.html
"If the table definition does not include an OID or a primary key, the
displayed data is read only."
As I recall Access has the same restriction.
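Assuming aliquotid really is unique and non-null across the table (which it should be, if it was meant to be the key), a one-line fix along these lines should make the grids editable again:

```sql
-- Promote aliquotid to the primary key. This will fail if any
-- duplicate or NULL aliquotid values exist; those would need
-- cleaning up first.
ALTER TABLE public.tblaliquot ADD PRIMARY KEY (aliquotid);
```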
--
Adrian Klaver
adrian.klaver@aklaver.com
On 12/20/19 11:02 PM, Adrian Klaver wrote:
On 12/20/19 6:28 PM, Ron wrote:
The lack of PK should make insertions *more* forgiving. (It really
should have a PK, but that's irrelevant to this problem.)
Actually it is the problem:
https://www.pgadmin.org/docs/pgadmin4/4.16/editgrid.html
"If the table definition does not include an OID or a primary key, the
displayed data is read only."
As I recall Access has the same restriction.
Shows how much I use pgAdmin and Access...
--
Angular momentum makes the world go 'round.
Thank you for the confirmation on the need for a primary key. I suspected that, since the GUI needs an easy way to refer to a particular row. I think I saw such a restriction in the Qt documentation on a different project (just to be
clear: no Qt involved in this one--just more evidence this is a pretty general pattern).
It's interesting that Access does not behave this way if the backend is an Access (.mdb) file; I suppose it knows enough to get some unique identifier in that case. The difference is not that the backend table in Access has a primary key; the reason the PG table lacked a primary key was that the Access table from which it was migrated didn't have one.
I've read more about sequences, and it seems mine just aren't in sync with the data, and that using sequences requires some care. Easier to discuss with an example.
CREATE TABLE tx (
xid serial,
a int8
);
This will produce a sequence tx_xid_seq.
If I do
INSERT INTO tx VALUES (3, 4);
the sequence doesn't know about it. So if I later create a default value with
INSERT INTO tx (a) VALUES (7);
or
INSERT INTO tx VALUES (DEFAULT, 9);
I'll just get the next value in the sequence.
The 3rd time I use the default value it will return 3, same as the record already there. If there is a PRIMARY KEY (or UNIQUE) constraint on xid the insertion will fail.
So....
1. When I migrate data, as I have done, I should ensure that the sequences are at safe values. The obvious way to do that would be setval using the max of the values in the data.
2. In operations, the program needs to either be consistent about getting id values from the default, or be very careful. Since I'm using an inherited application, I need to check.
When I migrate the data I do NOT want to use the sequence to generate the identifiers, since that would trash the referential integrity of the data.
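Something like this after the import should resync the sequence (using the table from earlier; COALESCE covers the empty-table case):

```sql
-- Point the sequence at the highest imported id, so the next
-- DEFAULT insert yields max(aliquotid) + 1.
SELECT setval('tblaliquot_aliquotid_seq',
              COALESCE((SELECT max(aliquotid) FROM tblaliquot), 1));
```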
Here's the transcript of my tests. BTW, why is log_cnt jumping to 32 on the sequence?
testNTB=> CREATE TABLE tx (
testNTB(> xid serial,
testNTB(> a int8
testNTB(> );
CREATE TABLE
testNTB=> select * from tx_xid_seq;
last_value | log_cnt | is_called
------------+---------+-----------
1 | 0 | f
(1 row)
testNTB=> INSERT INTO tx VALUES (3, 4);
INSERT 0 1
testNTB=> select * from tx_xid_seq;
last_value | log_cnt | is_called
------------+---------+-----------
1 | 0 | f
(1 row)
testNTB=> select * from tx;
xid | a
-----+---
3 | 4
(1 row)
testNTB=> INSERT INTO tx (a) VALUES (7);
INSERT 0 1
testNTB=> select * from tx;
xid | a
-----+---
3 | 4
1 | 7
(2 rows)
testNTB=> select * from tx_xid_seq;
last_value | log_cnt | is_called
------------+---------+-----------
1 | 32 | t
(1 row)
testNTB=> INSERT INTO tx VALUES (DEFAULT, 9);
INSERT 0 1
testNTB=> select * from tx_xid_seq;
last_value | log_cnt | is_called
------------+---------+-----------
2 | 31 | t
(1 row)
testNTB=> select * from tx;
xid | a
-----+---
3 | 4
1 | 7
2 | 9
(3 rows)
testNTB=> INSERT INTO tx (a) VALUES (77);
INSERT 0 1
testNTB=> select * from tx_xid_seq;
last_value | log_cnt | is_called
------------+---------+-----------
3 | 30 | t
(1 row)
testNTB=> select * from tx;
xid | a
-----+----
3 | 4
1 | 7
2 | 9
3 | 77
(4 rows)
--Now with a PRIMARY KEY constraint
testNTB=> CREATE TABLE ty (
testNTB(> yid serial,
testNTB(> a int8,
testNTB(> PRIMARY KEY (yid));
CREATE TABLE
testNTB=> INSERT INTO ty VALUES (2, 10);
INSERT 0 1
testNTB=> select * from ty;
yid | a
-----+----
2 | 10
(1 row)
testNTB=> select * from ty_yid_seq;
last_value | log_cnt | is_called
------------+---------+-----------
1 | 0 | f
(1 row)
testNTB=> INSERT INTO ty VALUES (DEFAULT, 20);
INSERT 0 1
testNTB=> select * from ty;
yid | a
-----+----
2 | 10
1 | 20
(2 rows)
testNTB=> select * from ty_yid_seq;
last_value | log_cnt | is_called
------------+---------+-----------
1 | 32 | t
(1 row)
testNTB=> INSERT INTO ty VALUES (DEFAULT, 30);
ERROR: duplicate key value violates unique constraint "ty_pkey"
DETAIL: Key (yid)=(2) already exists.
On 12/21/19 12:21 PM, Boylan, Ross wrote:
Thank you for the confirmation on the need for a primary key. I suspected that, since the GUI needs an easy way to refer to a particular row. I think I saw such a restriction in the Qt documentation on a different project (just to be
clear: no Qt involved in this one--just more evidence this is a pretty general pattern).
It's interesting that Access does not behave this way if the backend is an Access (.mdb) file; I suppose it knows enough to get some unique identifier in that case. The
"When you create a new table in Datasheet view, Access automatically
creates a primary key for you and assigns it a field name of "ID" and
the AutoNumber data type."
difference is not that the backend table in Access has a primary key;
the reason the PG table lacked a primary key was that the Access table
from which it was migrated didn't have one.
My guess is the migration process missed that aliquotid was the PK.
I've read more about sequences, and it seems mine just aren't in sync with the data, and that using sequences requires some care. Easier to discuss with an example.
CREATE TABLE tx (
xid serial,
a int8
);
This will produce a sequence tx_xid_seq.
If I do
INSERT INTO tx VALUES (3, 4);
the sequence doesn't know about it. So if I later create a default value with
INSERT INTO tx (a) VALUES (7);
or
INSERT INTO tx VALUES (DEFAULT, 9);
I'll just get the next value in the sequence.
The 3rd time I use the default value it will return 3, same as the record already there. If there is a PRIMARY KEY (or UNIQUE) constraint on xid the insertion will fail.
So....
1. When I migrate data, as I have done, I should ensure that the sequences are at safe values. The obvious way to do that would be setval using the max of the values in the data.
2. In operations, the program needs to either be consistent about getting id values from the default, or be very careful. Since I'm using an inherited application, I need to check.
When I migrate the data I do NOT want to use the sequence to generate the identifiers, since that will trash the referential integrity of the data
That rather depends on what you are using to migrate the data. Postgres
pg_dump will not do that if you dump the table.
Here's the transcript of my tests. BTW, why is log_cnt jumping to 32 on the sequence?
log_cnt is an internal counter:
/messages/by-id/7487.1049476267@sss.pgh.pa.us
and does not really apply to the sequence value. The value is determined
by last_value and is_called. For more information see:
https://www.postgresql.org/docs/11/sql-createsequence.html
You skipped over the sequence by starting with a value of 2 and then you
went back and used the sequence, so when it got to 2 it threw a
duplicate error which is correct as the field was a PK. Best practice on
a sequence is to let it run on its own and not try to override it. If
you do then you will need to familiarize yourself with the functions here:
https://www.postgresql.org/docs/11/functions-sequence.html
--
Adrian Klaver
adrian.klaver@aklaver.com
My mail interface (Outlook on the Web) really can't quote properly, so I'll just do snips.
"When you create a new table in Datasheet view, Access automatically
creates a primary key for you and assigns it a field name of "ID" and
the AutoNumber data type."
That quote, and the documentation mentioned before it, is not directly relevant, since I'm not creating new tables in Access.
My guess is the migration process missed that aliquotid was the PK.
Substantively aliquotid is the primary key, but in terms of formal table properties there is no primary key. The export process does create primary keys in PG for tables that have them in Access. My export process also has no foreign key relations.
It is quite likely that subjecting the current data to "should be there" constraints on primary and foreign keys will reveal features of the data that shouldn't be there, such as missing or lost references or values.
Also, if I impose those constraints when I create the table definitions, the import will become much more order sensitive, I think. If I import a table with a foreign key in a table not yet imported, it will presumably fail. And since tables can refer to each other, there may be no import order that will work. So I guess I better add foreign key constraints after the main import. Of course, that will require valid data, but that's a separate problem.
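As a sketch of that approach (the referenced table and the constraint name here are made up for illustration):

```sql
-- After all tables are loaded, add the FK without checking
-- existing rows, then validate once the data is cleaned up.
ALTER TABLE public.tblaliquot
    ADD CONSTRAINT tblaliquot_preparationid_fkey
    FOREIGN KEY (preparationid)
    REFERENCES public.tblpreparation (preparationid)
    NOT VALID;

-- Later, after repairing any orphaned references:
ALTER TABLE public.tblaliquot
    VALIDATE CONSTRAINT tblaliquot_preparationid_fkey;
```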
I migrated by using a slightly modified version of
' exportSQL version 3.2-dev
' www.rot13.org/~dpavlin/projects.html#sql
'
' based on exportSQL version 2.0 from www.cynergi.net/prod/exportsql/
'
' (C) 1997-98 CYNERGI - www.cynergi.net, info@cynergi.net
' (C) Pedro Freire - pedro.freire@cynergi.net (do not add to mailing lists without permission)
' (c) 2000-2001 Dobrica Pavlinusic <dpavlin@rot13.org> - added PostgreSQL support
It needed some tweaks to work with current PG. It does preserve primary key values.
Thanks for the references on log_cnt and sequences. I can see that just using the defaults is the easiest path, but I clearly can't do that on import. Cleaning the sequence up after import seems straightforward, though the export code isn't doing it. Whether the main application relies strictly on defaults I don't know.
Ross
On 12/21/19 1:41 PM, Boylan, Ross wrote:
This might be easier to figure out if you outline what is going on:
1) The purpose of the migration?
2) A general sense of what the application is and what it does.
3) Have you looked at the Relations tab in Access to see what if any
relationships are there?
--
Adrian Klaver
adrian.klaver@aklaver.com
From: Adrian Klaver <adrian.klaver@aklaver.com>
Sent: Saturday, December 21, 2019 3:37 PM
This might be easier to figure out if you outline what is going on:
Since I seem to have gone on in my responses, let me do one-line answers before the fuller ones.
1) The purpose of the migration?
Primarily to use currently supported software. Secondarily to improve data integrity, security and auditability.
2) A general sense of what the application is and what it does.
A GUI for managing medical specimens and associated information for multiple research studies.
3) Have you looked at the Relations tab in Access to see what if any relationships are there?
Yes and yes. The migration program doesn't currently use that information, and there are some complexities.
Now the more elaborate answers:
1) The purpose of the migration?
The immediate purpose of the migration is to use software that is supported. The application currently runs on Windows 7, Office 2010 32 bit. As of Jan 14 we have to be off Win 7 because the University says so (because MS said so) and will be disabling such systems. Office 2010, even now, can't be installed because of licensing. We have to use Win 10, Office 2016 (even though 2019 is available). We can do either 32 or 64 bit office and decided to target 64 bit.
We currently use Access's split database configuration, meaning the "backend" is a file on a shared drive.
Since we had to go to the pain of migrating anyway, this seemed a good time to switch to a server-based backend. Although the Access split configuration has worked, having multiple users touching the same file always makes me uncomfortable, and a real database server would seem to offer better assurances of data integrity, security, and auditability. Since the databases store sensitive medical information, these are concerns not only for us but for our funders and other oversight bodies. Historically, the requirements have gotten increasingly stringent, and it seems to me there is some possibility that the Access "backend" will fall short of the requirements in the future.
Another consideration is that MS is increasingly deemphasizing using Access as a data store. Of course, they want people to go to MS SQL Server. When I visited MS's web page for Access 2016 I couldn't find any statement that it could be used without a server-based backend, even though it can. But depending on a feature that's getting so studiously ignored seems risky.
Finally, I had some really bad experiences--that is, lost a day--trying to get queries to work that wouldn't, because MS Access SQL just isn't quite SQL. I was hoping to avoid that in the future.
Because of the time pressure, we'll be sticking with the file-based backend for now.
The front-end application (described next) is built on Access and is fairly substantial; migrating it to another platform seems not worth it.
2) A general sense of what the application is and what it does.
The application is a GUI for relatively non-computer-technical users. They run medical research studies, and each time someone comes in various tests are performed and recorded, and specimens collected. Other health-related information is also collected. The core function is the management of biological specimens that result.
We also serve as a repository for specimens collected at other sites. There are various types of specimens and various procedures that can be performed on each.
Researchers then query the database by outlining what kind of specimens they want and getting a list of specimens. Usually they do it by asking me, and I do the queries.
The actual amount of data is not trivial, but is not that large by current standards. The file-based backends are around 20MB (after a compact and repair), and the largest tables have around 100K records. I don't think there's anything there that requires us to use 64 bits. The data are very valuable, in that they represent over a decade's work, lots of $ of effort, and without them the physical specimens would be essentially useless.
The number of users, especially simultaneous users, is also relatively small, around 10.
3) Have you looked at the Relations tab in Access to see what if any relationships are there?
Yes, but the export program doesn't :) The relations tab documents many, but not all, of the relations in the database. The relations are also a little tricky because sometimes the lack of a relation should not be considered disqualifying for a specimen. Simple example: freezer type is an id to be looked up in a small table of freezer type ids and their names. If the freezer type is missing or nonsense, we may still want the sample. That can be expressed as a left join; the "Access SQL is not SQL" problems centered on left joins.
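In Postgres that left join looks something like this (tblfreezertype and its columns are hypothetical names for the lookup table):

```sql
-- Keep every aliquot, even where the freezer-type lookup fails;
-- freezertypename is simply NULL for those rows.
SELECT a.aliquotid, a.aliquotlabel, ft.freezertypename
FROM tblaliquot a
LEFT JOIN tblfreezertype ft ON ft.freezertypeid = a.storagetype;
```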
Ross
On 12/21/19 6:40 PM, Boylan, Ross wrote:
[snip]
Because of the time pressure, we'll be sticking with the file-based backend for now.
Alright this is the part where I got confused. I think what is going on is:
1) The immediate change is going to be to Access 2016 on Windows 10 64
keeping the data in Access files(.accdb)
2) The long term plan is to move the data from the Access files to a
stand alone SQL server, presumably Postgres. You are looking for
assistance in converting Access tables and relationships to Postgres.
3) At some point the GUI will point to the data in the server instead of
in the Access files.
Is any of the above correct?
More below.
The front-end application (described next) is built on Access and is fairly substantial; migrating it to another platform seems not worth it.
2) A general sense of what the application is and what it does.
The application is a GUI for relatively non-computer-technical users. They run medical research studies, and each time someone comes in various tests are performed and recorded, and specimens collected. Other health-related information is also collected. The core function is the management of biological specimens that result.
We also serve as a repository for specimens collected at other sites. There are various types of specimens and various procedures that can be performed on each.
Researchers then query the database by outlining what kind of specimens they want and getting a list of specimens. Usually they do it by asking me, and I do the queries.
The actual amount of data is not trivial, but is not that large by current standards. The file-based backends are around 20MB (after a compact and repair), and the largest tables have around 100K records. I don't think there's anything there that requires us to use 64 bits. The data are very valuable, in that they represent over a decade's work, lots of $ of effort, and without them the physical specimens would be essentially useless.
The number of users, esp simultaneous users, is also relatively small, around 10.
3) Have you looked at the Relations tab in Access to see what if any relationships are there?
Yes, but the export program doesn't :) The relations tab documents many, but not all, of the relations in the database. The relations are also a little tricky because sometimes the lack of a relation should not be considered disqualifying for a specimen. Simple example: freezer type is an id to be looked up in a small table of freezer type ids and their names. If the freezer type is missing or nonsense, we may still want the sample. That can be expressed as a left join; the "Access SQL is not SQL" problems centered on left joins.
Postgres is going to be stricter about this. Access has the concept of
suggested relationships that are not enforced:
"A table relationship is represented by a relationship line drawn
between tables in the Relationships window. A relationship that does not
enforce referential integrity appears as a thin line between the common
fields supporting the relationship. When you select the relationship by
clicking its line, the line thickens to indicate it is selected. If you
enforce referential integrity for this relationship, the line appears
thicker at each end. In addition, the number 1 appears over the thick
portion of the line on one side of the relationship, and the infinity
symbol (∞) appears over the thick portion of the line on the other side."
Postgres Foreign Key relationships either exist or they don't. Now you
can employ 'cheats' with your own triggers, but that is another subject.
As to your freezer example:
create table freezer_info(freezer_id int PRIMARY KEY);
create table sample_tbl(sample_id serial PRIMARY KEY, freezer_fk int
REFERENCES freezer_info ON UPDATE CASCADE);
insert into freezer_info values (1), (2);
select * from freezer_info ;
freezer_id
------------
1
2
insert into sample_tbl(freezer_fk) values (1), (null);
select * from sample_tbl;
sample_id | freezer_fk
-----------+------------
1 | 1
2 | NULL
The above will not cover the case where freezer_fk is a nonsense
value (no relation to freezer_info). In that situation you would probably
need to change the nonsense value to NULL or some placeholder value.
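For instance, something like this would find the nonsense values and NULL them out (a sketch against the example tables above, not your real schema):

```sql
-- Find rows whose freezer_fk points at no row in freezer_info.
SELECT s.sample_id, s.freezer_fk
FROM   sample_tbl s
LEFT   JOIN freezer_info f ON f.freezer_id = s.freezer_fk
WHERE  s.freezer_fk IS NOT NULL
  AND  f.freezer_id IS NULL;

-- Replace the orphaned values with NULL so a FK can later be added.
UPDATE sample_tbl s
SET    freezer_fk = NULL
WHERE  s.freezer_fk IS NOT NULL
  AND  NOT EXISTS (SELECT 1 FROM freezer_info f
                   WHERE f.freezer_id = s.freezer_fk);
```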
Ross
--
Adrian Klaver
adrian.klaver@aklaver.com
From: Adrian Klaver <adrian.klaver@aklaver.com>
Sent: Sunday, December 22, 2019 10:35 AM
.....
Alright this is the part where I got confused. I think what is going on is:
1) The immediate change is going to be to Access 2016 on Windows 10 64
keeping the data in Access files(.accdb)
2) The long term plan is to move the data from the Access files to a
stand alone SQL server, presumably Postgres. You are looking for
assistance in converting Access tables and relationships to Postgres.
3) At some point the GUI will point to the data in the server instead of
in the Access files.
Is any of the above correct?
Basically, yes.
1) The current file is in the older Access .mdb format, and will stay that way for the immediate conversion to Win 10/Access 2016.
2) I'd like to go to Postgres medium term. I have converted the Access data to Postgres via an Access/VBA program that writes out SQL files. It clearly needs some work.
I think I may have everything I need to tweak it.
3) Basically yes, though in detail the GUI and the code behind it directly use "linked tables" that are local to the front end. These linked tables are the Access equivalent of symlinks to a particular table on a particular database. They will use the PG ODBC driver to do the talking. The app uses those linked tables now, but the link simply points to the backend file on the disk.
One other issue came up in testing: searches against the Access backend are case insensitive, while searches against the PG backend are case-sensitive. It would be nice to make them case-insensitive, and I've been reading about PG collation options to do so.
Case-sensitivity is completely distinct from all the other issues I've discussed. The sense was also that the problem/difference was not a show-stopper. Current behavior is that if you search for id abc123 it will retrieve an id stored as ABC123. With the PG backend, searching on abc123 retrieves nothing.
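For reference, here are two server-side options I've been reading about, sketched against the tblaliquot.aliquotlabel column from the table definition above (untested against our real data):

```sql
-- Option 1: the citext extension makes equality comparisons
-- case-insensitive (note this drops the varchar(40) length limit).
CREATE EXTENSION IF NOT EXISTS citext;
ALTER TABLE tblaliquot ALTER COLUMN aliquotlabel TYPE citext;
-- Now WHERE aliquotlabel = 'abc123' also matches 'ABC123'.

-- Option 2 (PG 12+): a nondeterministic ICU collation.
-- Caveat: LIKE does not work with nondeterministic collations, only =.
CREATE COLLATION case_insensitive
  (provider = icu, locale = 'und-u-ks-level2', deterministic = false);
ALTER TABLE tblaliquot
  ALTER COLUMN aliquotlabel TYPE character varying(40)
  COLLATE case_insensitive;
```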
More below.
3) Have you looked at the Relations tab in Access to see what if any relationships are there?
....
Yes, but the export program doesn't :) The relations tab documents many, but not all, of the relations in the database. The relations are also a little tricky because sometimes the lack of a relation should not be considered disqualifying for a specimen. Simple example: freezer type is an id to be looked up in a small table of freezer type ids and their names. If the freezer type is missing or nonsense, we may still want the sample. That can be expressed as a left join; the "Access SQL is not SQL" problems centered on left joins.
Postgres is going to be stricter about this. Access has the concept of
suggested relationships that are not enforced:
Thanks for the pointer. I'm not sure if we're using any of those suggested relationships, but we're definitely using relationships that one infers only by looking at the SQL code. In other words, queries link tables by fields that are not mentioned on the relationships tab.
......
Postgres Foreign Key relationships either exist or they don't. Now you
can employ 'cheats' with your own triggers, but that is another subject.
As to your freezer example:
create table freezer_info(freezer_id int PRIMARY KEY);
create table sample_tbl(sample_id serial PRIMARY KEY, freezer_fk int
REFERENCES freezer_info ON UPDATE CASCADE);
The direct translation of our current app would be to omit the foreign key relation in the table definition, but to have queries that include a left join from sample_tbl to freezer_info.
That raises one other question: what is the relation between the relations expressed on the server via FOREIGN KEY declarations and relations known to Access? At first blush, they are completely different. In Access a relation is an object that lives on the front-end, while the server relations obviously live on the server. I don't know if Access, perhaps via the ODBC spec, does anything to guarantee their consistency. Clearly since Access relations can have "suggested relations" which have no backend equivalent, the 2 sets of relations cannot in general be mirror images.
Identifying and enforcing all foreign key relations, is something I'm planning to defer til after the migration to PG. There is a noticeable amount of funky stuff in the data, which could use some housekeeping regardless of platform. The cleanup is not likely to be simple.
One of the long-run benefits of using a database server is that it should limit the possibilities for funkiness by enforcing referential integrity and properly cleaning up after incomplete operations, instead of leaving pieces of them in the database. But we can't enforce referential integrity until our data exhibit it! And the transactional integrity probably depends more on proper coding on the front-end app than the choice of backend; file-based Access backends do support transactions.
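One option for deferring the cleanup, if I understand the Postgres docs correctly, is to add the constraint without validating existing rows (sketched against the example tables above):

```sql
-- NOT VALID enforces the FK for new and updated rows only,
-- leaving the existing funky rows alone until cleanup is done.
ALTER TABLE sample_tbl
  ADD CONSTRAINT sample_freezer_fk
  FOREIGN KEY (freezer_fk) REFERENCES freezer_info (freezer_id)
  NOT VALID;

-- Later, once the old data has been cleaned up:
ALTER TABLE sample_tbl VALIDATE CONSTRAINT sample_freezer_fk;
```

That would stop new funkiness from entering while the historical cleanup proceeds at its own pace.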
Ross
On 12/22/19 11:59 AM, Boylan, Ross wrote:
From: Adrian Klaver <adrian.klaver@aklaver.com>
Sent: Sunday, December 22, 2019 10:35 AM.....
Alright this is the part where I got confused. I think what is going on is:
1) The immediate change is going to be to Access 2016 on Windows 10 64
keeping the data in Access files(.accdb)
2) The long term plan is to move the data from the Access files to a
stand alone SQL server, presumably Postgres. You are looking for
assistance in converting Access tables and relationships to Postgres.
3) At some point the GUI will point to the data in the server instead of
in the Access files.
Is any of the above correct?
Basically, yes.
1) The current file is in the older Access .mdb format, and will stay that way for the immediate conversion to Win 10/Access 2016.
2) I'd like to go to Postgres medium term. I have converted the Access data to Postgres via an Access/VBA program that writes out SQL files. It clearly needs some work.
The long term solution is?
More answers inline below.
I think I may have everything I need to tweak it.
3) Basically yes, though in detail the GUI and the code behind it directly use "linked tables" that are local to the front end. These linked tables are the Access equivalent of symlinks to a particular table on a particular database. They will use the PG ODBC driver to do the talking. The app uses those linked tables now, but the link simply points to the backend file on the disk.
So at some point you will switch the link from being Access native to a
ODBC DSN pointing at a Postgres database, correct?
One other issue came up in testing: searches against the Access backend are case insensitive, while searches against the PG backend are case-sensitive. It would be nice to make them case-insensitive, and I've been reading about PG collation options to do so.
Case-sensitivity is completely distinct from all the other issues I've discussed. The sense was also that the problem/difference was not a show-stopper. Current behavior is that if you search for id abc123 it will retrieve an id stored as ABC123. With the PG backend, searching on abc123 retrieves nothing.
How about:
test=# select 'abc123' ilike 'ABC123';
?column?
----------
t
(1 row)
More below.
3) Have you looked at the Relations tab in Access to see what if any relationships are there?
....
Yes, but the export program doesn't :) The relations tab documents many, but not all, of the relations in the database. The relations are also a little tricky because sometimes the lack of a relation should not be considered disqualifying for a specimen. Simple example: freezer type is an id to be looked up in a small table of freezer type ids and their names. If the freezer type is missing or nonsense, we may still want the sample. That can be expressed as a left join; the "Access SQL is not SQL" problems centered on left joins.
Postgres is going to be stricter about this. Access has the concept of
suggested relationships that are not enforced:
Thanks for the pointer. I'm not sure if we're using any of those suggested relationships, but we're definitely using relationships that one infers only by looking at the SQL code. In other words, queries link tables by fields that are not mentioned on the relationships tab.
......
Postgres Foreign Key relationships either exist or they don't. Now you
can employ 'cheats' with your own triggers, but that is another subject.
As to your freezer example:
create table freezer_info(freezer_id int PRIMARY KEY);
create table sample_tbl(sample_id serial PRIMARY KEY, freezer_fk int
REFERENCES freezer_info ON UPDATE CASCADE);
The direct translation of our current app would be to omit the foreign key relation in the table definition, but to have queries that include a left join from sample_tbl to freezer_info.
That raises one other question: what is the relation between the relations expressed on the server via FOREIGN KEY declarations and relations known to Access? At first blush, they are completely different. In Access a relation is an object that lives on the front-end, while the server relations obviously live on the server. I don't know if Access, perhaps via the ODBC spec, does anything to guarantee their consistency. Clearly since Access relations can have "suggested relations" which have no backend equivalent, the 2 sets of relations cannot in general be mirror images.
Trying to parse the above out:
1) When you say relations you mean Foreign Key or equivalent, correct?
Just confirming as relation is often used to mean table.
2) As I remember it an 'enforced relationship' in Access is actually a
FK, whereas one that is not is just stored as metadata somewhere.
3) Are you asking whether it is possible for a table in Access to
have a direct relationship with a table stored in Postgres and
connected via ODBC? If so my answer is that I am not sure. I believe you
can use the Relations tab to set up a faux relationship, but that will
only be honored in the context of Access and only if you want it to be.
4) You can set up an actual FK on the server between server tables, with
the data clean up caveats you mention below.
5) At this point it seems your data is going to live in two worlds and
in the end it will fall on you to maintain the integrity between the worlds.
Identifying and enforcing all foreign key relations, is something I'm planning to defer til after the migration to PG. There is a noticeable amount of funky stuff in the data, which could use some housekeeping regardless of platform. The cleanup is not likely to be simple.
One of the long-run benefits of using a database server is that it should limit the possibilities for funkiness by enforcing referential integrity and properly cleaning up after incomplete operations, instead of leaving pieces of them in the database. But we can't enforce referential integrity until our data exhibit it! And the transactional integrity probably depends more on proper coding on the front-end app than the choice of backend; file-based Access backends do support transactions.
Ross
--
Adrian Klaver
adrian.klaver@aklaver.com
1. Timelines
The long term solution is?
I just meant I'm hoping to switch to Postgres soonish.
So at some point you will switch the link from being Access native to a
ODBC DSN pointing at a Postgres database, correct?
The switch will be to ODBC but no DSN. You can specify the connection properties directly: https://docs.microsoft.com/en-us/office/troubleshoot/access/create-dsn-less-connection-linkted-table.
Switching to ODBC proved the most challenging part of the transition so far, but it's done. I don't mean that going DSN-less was challenging; I mean switching to ODBC was challenging. For the gory details: https://social.msdn.microsoft.com/Forums/en-US/eea6a780-488a-4154-97ac-ae318a870993/unable-to-relink-tables?forum=accessdev
This was something that should have worked through the Access GUI (at least using a DSN), and in fact MS's documentation said it would work. The actual Linked Table Manager did not behave as documented.
2. Case Sensitivity
How about:
test=# select 'abc123' ilike 'ABC123';
That would work, but it depends on finding and rewriting all the places such a search is made. I would also have to remember to do it whenever I did a search. Hence my interest in a more wholesale solution.
3. "Relations"
I wrote
That raises one other question: what is the relation between the relations expressed on the server via FOREIGN KEY declarations and relations known to Access? At first blush, they are completely different. In Access a relation is an object that lives on the front-end, while the server relations obviously live on the server. I don't know if Access, perhaps via the ODBC spec, does anything to guarantee their consistency. Clearly since Access relations can have "suggested relations" which have no backend equivalent, the 2 sets of relations cannot in general be mirror images.
You replied:
Trying to parse the above out:
1) When you say relations you mean Foreign Key or equivalent, correct?
Just confirming as relation is often used to mean table.
Correct: I do not mean a table when discussing relations above. I do mean either a foreign key relationship or a Relation object as defined in Access (represented in the Access GUI on the relationship tab).
2) As I remember it an 'enforced relationship' in Access is actually a FK, whereas one that is not is just stored as metadata somewhere.
That raises the possibility that some of the Access Relation objects are constructed dynamically from info on foreign keys in the backend. If that's the case it would diminish the coordination problems.
3) Are you asking whether it is possible for a table in Access to
have a direct relationship with a table stored in Postgres and
connected via ODBC? If so my answer is that I am not sure. I believe you
can use the Relations tab to set up a faux relationship, but that will
only be honored in the context of Access and only if you want it to be.
No, not my question. My concern is that, since there are two different representations of the relations in the data, one on the server expressed as foreign keys and one on the frontend/Access as Relation objects, they could be out of sync.
4) You can set up an actual FK on the server between server tables, with
the data clean up caveats you mention below.
Yes.
5) At this point it seems your data is going to live in two worlds and
in the end it will fall on you to maintain the integrity between the worlds.
That's what I'm afraid of.
The data itself lives only on the backend, but information about it lives in both places.
I'm not sure that the relationship synchronization issue is limited to cases with a server backend. Even the file based data, managed directly by Access, has foreign key relations in it (I'm pretty sure), and those too could be out of sync with the Relation objects describing that same data.
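On the Postgres side at least, the foreign keys actually in force can be listed from the system catalogs for comparison against Access's Relationships window (a sketch; output formatting may vary):

```sql
-- List every foreign key constraint in the database:
-- child table, referenced table, constraint name, and full definition.
SELECT conrelid::regclass  AS child_table,
       confrelid::regclass AS parent_table,
       conname,
       pg_get_constraintdef(oid) AS definition
FROM   pg_constraint
WHERE  contype = 'f'
ORDER  BY conrelid::regclass::text;
```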
I also probably need a better understanding of when operations happen in Access SQL vs being passed through to the server.
Ross
On 12/22/19 4:02 PM, Boylan, Ross wrote:
5) At this point it seems your data is going to live in two worlds and
in the end it will fall on you to maintain the integrity between the worlds.
That's what I'm afraid of.
The data itself lives only on the backend, but information about it lives in both places.
To make things more interesting, table relationships can be hidden in
the Relations tab:
"On the Design tab, in the Relationships group, click All Relationships.
This displays all of the defined relationships in your database. Note
that hidden tables (tables for which the Hidden check box in the table's
Properties dialog box is selected) and their relationships will not be
shown unless the Show Hidden Objects check box is selected in the
Navigation Options dialog box."
I'm not sure that the relationship synchronization issue is limited to cases with a server backend. Even the file based data, managed directly by Access, has foreign key relations in it (I'm pretty sure), and those too could be out of sync with the Relation objects describing that same data.
Pretty sure that FK's show up in the Relations tab as:
" If you enforce referential integrity for this relationship, the line
appears thicker at each end. In addition, the number 1 appears over the
thick portion of the line on one side of the relationship, and the
infinity symbol (∞) appears over the thick portion of the line on the
other side."
I also probably need a better understanding of when operations happen in Access SQL vs being passed through to the server.
Yeah.
The Postgres side can be dealt with by looking at the server log. You
can also enable the ODBC logging, though right now I cannot remember how
to do that.
Don't know how you would do that for the internal Access side of things.
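For the server log side, something like this turns on full statement logging so you can watch what Access/ODBC actually sends (needs superuser; remember to turn it off again, as it is verbose):

```sql
-- Log every statement the server receives, cluster-wide:
ALTER SYSTEM SET log_statement = 'all';
SELECT pg_reload_conf();

-- Afterwards, put it back:
ALTER SYSTEM RESET log_statement;
SELECT pg_reload_conf();
```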
Ross
--
Adrian Klaver
adrian.klaver@aklaver.com