Subject | Help - out of memory error while activating index |
---|---|
Author | idiojet |
Post date | 2003-11-30T14:26:27Z |
Hi,
I am trying to make two inactive alternate indices on a large table active and receiving the message "Sort Error: Out of Memory" each time I try to set either one active.
The ALTER INDEX ... ACTIVE command runs for about an hour and fails with the above error, leaving a temp file of about 650 MB. I've set
TMP_DIRECTORY 5000000000 "D:\TEMP\"
in ibconfig, and am using Firebird 1.0 on a Pentium IV with about 100MB swap on C:, 600MB swap on D: and 8-9 GB spare on D:.
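For reference, here is the relevant ibconfig fragment. My understanding (which I haven't verified) is that several TMP_DIRECTORY lines can be listed and are used in order as each one fills; the extra entry below is only a hypothetical example path:

```
# ibconfig excerpt (Firebird 1.0); the number is the space limit in bytes
TMP_DIRECTORY 5000000000 "D:\TEMP\"
# possibly chainable with further entries, e.g. (hypothetical):
# TMP_DIRECTORY 2000000000 "E:\TEMP\"
```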
I'm aware that the indices are probably as large as the table itself, but the table is re-filled once every three months and then many SELECT queries are run against it, so the process has been working well. I'd like to stick with it until I can re-design the database to tokenize the data.
The table has 6 million rows, and its structure and indices are as follows:
FORMAT_ID    INTEGER       Not Null
CUSTOMER_ID  INTEGER       Not Null
SURNAME      VARCHAR(100)  CHARACTER SET WIN1252  Nullable
FORENAMES    VARCHAR(100)  CHARACTER SET WIN1252  Nullable
CONSTRAINT PK_TEMP_CUSTOMERS:
  Primary key (CUSTOMER_ID, FORMAT_ID)
AI_CUST_FORENAMES_SURNAME INDEX ON TEMP_CUSTOMERS(FORENAMES, SURNAME)
AI_CUST_SURNAME_FORENAMES INDEX ON TEMP_CUSTOMERS(SURNAME, FORENAMES)
RDB$PRIMARY22 UNIQUE INDEX ON TEMP_CUSTOMERS(CUSTOMER_ID, FORMAT_ID)
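The activation step itself is just the standard DDL, run for each index in turn after the quarterly reload (sketch, using the index names above; I commit between the two so only one rebuild's sort is in flight at a time):

```sql
-- Reactivate the alternate indices; each ALTER INDEX ... ACTIVE
-- rebuilds that index from scratch, which triggers the big sort.
ALTER INDEX AI_CUST_SURNAME_FORENAMES ACTIVE;
COMMIT;
ALTER INDEX AI_CUST_FORENAMES_SURNAME ACTIVE;
COMMIT;
```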
I'd be grateful for any suggestions on how I can remedy this 'cos I'm at my wit's end trying to find a solution.
Thanks,
Jon.