Subject | Re: varchar fields and memory |
---|---|
Author | kaczy27 |
Post date | 2004-09-08T10:39:41Z |
> > I've always thought that defining the field as varchar is a way to
> > avoid large memory consumption.
> >
> > I have a table that stores text data, and although the data
> > rarely exceed 50 characters I set it to be varchar(1024).
>
> I usually set it to what the maximum would be. So, if in your case
> the maximum could be 100 chars, then set it to varchar(100).

The problem is that I have to set a SAFE size; the data rarely
exceed 50 characters, but sometimes they do, and for some descriptive
values I get some 200-500 characters. I decided (and I don't know if
I won't be bombed for that) that if a client uses more than 1024
characters he will have to create an additional field for it, like
addressline1, addressline2; forcing him to use four fields would be
stretching the line :(
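
To make it concrete, roughly this kind of layout (the table and column
names here are invented for the example; only the varchar sizes matter):

  CREATE TABLE CLIENT_DATA (
    ID           INTEGER NOT NULL PRIMARY KEY,
    DESCRIPTION  VARCHAR(1024), /* rarely over 50 chars, 200-500 for some values */
    ADDRESSLINE1 VARCHAR(1024), /* overflow fields if one value is not enough */
    ADDRESSLINE2 VARCHAR(1024)
  );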

> The memory consumption on the client would be directly related to
> the programming environment and connectivity components used to
> interface with FB's client DLL. Because FB's client DLL is just a
> mediator between the server and your application, it doesn't store
> any of the data retrieved from the server (that's your
> application's job).

I am sending only the results of the grouping, some 300-1000 records
at a time, and because the results are trimmed for transmission I
should have some 150 kB to transfer instead of 1 MB, which doesn't
worry me. I do, however, care about the server's memory requirements.
The clients I am facing have really low-end Linux machines.
Those 1000 records will be the aggregation of some 10,000, which
translates into a 10 MB raw memory requirement per query if I
understand correctly. I expect some 20 simultaneous users doing one
query every 30 seconds (some queue theory calculations would be
handy).
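
Back-of-envelope, assuming every value really costs the declared
1024 bytes (the worst case I am planning for):

  SELECT
    10000 * 1024 / (1024.0 * 1024) AS MB_RAW_PER_QUERY,      /* ~9.8 MB */
    1000 * 1024 / (1024.0 * 1024) AS MB_PER_RESULT_SET,      /* ~1 MB   */
    20 * 10000 * 1024 / (1024.0 * 1024) AS MB_IF_ALL_OVERLAP /* ~195 MB */
  FROM RDB$DATABASE;

(RDB$DATABASE is the one-row system table, handy for evaluating
expressions; the last figure only matters if all 20 queries happen to
overlap.)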

This is on top of some other, less straining database operations.
CUIN Kaczy

> --
> Best regards,
> Daniel Rail
> Senior System Engineer
> ACCRA Group Inc. (www.accra.ca)
> ACCRA Med Software Inc. (www.filopto.com)