More PGDBE questions

Use this forum for questions and answers regarding PostgreSQL and the PGDBE.
rdonnay
Site Admin
Posts: 4804
Joined: Wed Jan 27, 2010 6:58 pm
Location: Boise, Idaho USA

Re: More PGDBE questions

#21 Post by rdonnay »

Jimmy wrote: Where to send the sample to?
Jimmy –

Thank you for the offer, but I think I made it clear on the forum that I am not interested in using any third-party software at this time.

There are 2 projects I am working on for clients.
Both are very large, have been in production for about 30 years, and are entirely ISAM based, so the clients are only interested in an ISAM solution.
I have been asked to evaluate the PGDBE, not other products.

One project has 634,530 lines of source code, with 3 installations on dedicated servers.
The other has 159,071 lines of source code and 1,200 installations in peer-to-peer networks with no dedicated server, not counting an average of 135 custom data-driven reports that include ISAM source code.

What fully ISAM-based projects have you converted to PostgreSQL (without the PGDBE) while still using 98% of the ISAM code?

Why do you make recommendations and judgements without ever asking any questions?
The eXpress train is coming - and it has more cars.

rdonnay
Site Admin
Posts: 4804
Joined: Wed Jan 27, 2010 6:58 pm
Location: Boise, Idaho USA

Re: More PGDBE questions

#22 Post by rdonnay »

Tom -

Are indexes maintained correctly if records are added to a table with a SQL INSERT instead of dbAppend() while the PGDBE is in ISAM mode?
Also, can I create a new table using SQL CREATE TABLE and expect it to work in ISAM mode?

The reason I am asking is that I need to know how compatible SQL statements and ISAM database functions will be going forward.
Are there known issues?
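To make the question concrete, this is the kind of statement I mean (the table and column names are invented for illustration):

-- a row added purely at the SQL level, bypassing dbAppend():
INSERT INTO customer (name, zip) VALUES ('SMITH', '83701');

versus appending the same record through dbAppend() and field assignments, where the PGDBE does its own bookkeeping.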

I have been doing more experimenting with upsizing and was not happy with the time it took to upsize one of the tables.
The entire upsizing project took 7.69 hours.
Just 1 of the 120 tables, XMLTKT.DBF, accounted for 6.56 of those hours.

As an experiment, I broke that DBF table into 3 segments (XMLTKT1.DBF, XMLTKT2.DBF, XMLTKT3.DBF) and ran the upsizing again.
The total time to upsize the three files was reduced from 6.56 hours to 2.46 hours.
I am trying to determine if it is faster to combine those 3 files as follows:
SELECT * INTO XMLTKT FROM XMLTKT1;
INSERT INTO XMLTKT SELECT * FROM XMLTKT2;
INSERT INTO XMLTKT SELECT * FROM XMLTKT3;

I want to try to break up the upsizing process to bring in the most current data, so the app can be used quickly, and then have an automated process do the remaining upsizing in the background while the app is running.

Here's an update: I tried doing the inserts with PostgreSQL, but it won't let me. I did this kind of thing often with Advantage Server to archive data. Bummer!
The eXpress train is coming - and it has more cars.

Tom
Posts: 1230
Joined: Thu Jan 28, 2010 12:59 am
Location: Berlin, Germany

Re: More PGDBE questions

#23 Post by Tom »

Roger -

In Alaska's own "ILX" forum, there is an article about changing table structures using ALTER TABLE (don't do that for columns used in indexes!). But there is nothing about CREATE, INSERT, UPDATE or DELETE - I believe using those commands would kill the structure the PGDBE needs.

We use SQL commands to erase tables (since we found out which service tables need to be maintained), but we strictly stick to the navigational functions and commands for almost everything that concerns data maintenance. We only use SQL for some informational stuff, for retrieving data in some situations (into data objects/arrays) and a few other functions.
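As an example of the split (illustrative table and column names, not from our application):

-- a structural change of the kind the ILX article covers:
ALTER TABLE customer ADD COLUMN notes VARCHAR(50) DEFAULT '';

-- read-only, informational use of SQL:
SELECT COUNT(*) FROM customer WHERE zip = '10115';

Everything that writes record data goes through the navigational commands instead.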
Best regards,
Tom

"Did I offend you?"
"No."
"Okay, give me a second chance."

rdonnay
Site Admin
Posts: 4804
Joined: Wed Jan 27, 2010 6:58 pm
Location: Boise, Idaho USA

Re: More PGDBE questions

#24 Post by rdonnay »

Tom wrote: We only use SQL for some informational stuff, for retrieving data in some situations (into data objects/arrays) and a few other functions.
Thanks for your reply. I will check out ILX.
The eXpress train is coming - and it has more cars.

Tom
Posts: 1230
Joined: Thu Jan 28, 2010 12:59 am
Location: Berlin, Germany

Re: More PGDBE questions

#25 Post by Tom »

https://ilx.alaska-software.com/index.p ... nance.126/

(Looks like ALTER TABLE is safe for columns used in indexes as well.)
Best regards,
Tom

"Did I offend you?"
"No."
"Okay, give me a second chance."

PedroAlex
Posts: 235
Joined: Tue Feb 09, 2010 3:06 am

Re: More PGDBE questions

#26 Post by PedroAlex »

Regarding PostgreSQL server performance, it is important to have a correct and appropriate configuration.
Maybe this topic can help.
https://ilx.alaska-software.com/index.p ... ential.64/
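The defaults are very conservative. As a rough illustration only (the right values depend entirely on the server's RAM and workload, so treat these as placeholders):

-- typical settings raised from the defaults:
ALTER SYSTEM SET shared_buffers = '2GB';        -- takes effect after a server restart
ALTER SYSTEM SET effective_cache_size = '6GB';
ALTER SYSTEM SET work_mem = '64MB';
SELECT pg_reload_conf();                        -- applies the reloadable settings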
Pedro Alexandre

Tom
Posts: 1230
Joined: Thu Jan 28, 2010 12:59 am
Location: Berlin, Germany

Re: More PGDBE questions

#27 Post by Tom »

PedroAlex wrote: Maybe this topic can help.
This can't be mentioned too often. :)
Best regards,
Tom

"Did I offend you?"
"No."
"Okay, give me a second chance."

rdonnay
Site Admin
Posts: 4804
Joined: Wed Jan 27, 2010 6:58 pm
Location: Boise, Idaho USA

Re: More PGDBE questions

#28 Post by rdonnay »

Tom wrote: This can't be mentioned too often.
No matter how often it is mentioned, there seems to be no configuration that performs sufficiently in my tests.
The application I am working with uses 6 different UDFs in its indexes, and they are a huge performance bottleneck.

I used ChatGPT to convert those UDFs to PL/pgSQL.

They work excellently in a SQL SELECT, where they greatly improve query performance; however, when the same query runs through dbEval(), it still uses the original UDF index.
The SQL query takes less than 1 second, whereas the ISAM dbEval() takes 15 minutes.

I need to determine whether there is a way to use the PL/pgSQL function instead of that UDF in dbEval().
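For illustration, the conversions follow this general pattern (the function and column names are invented, with XMLTKT standing in as the table):

-- the index UDF rewritten as an IMMUTABLE function so PostgreSQL
-- can use it in a functional index:
CREATE OR REPLACE FUNCTION tkt_key(status TEXT, tktdate DATE)
RETURNS TEXT AS $$
BEGIN
   RETURN TRIM(status) || TO_CHAR(tktdate, 'YYYYMMDD');
END;
$$ LANGUAGE plpgsql IMMUTABLE;

CREATE INDEX idx_tkt_key ON xmltkt (tkt_key(status, tktdate));

-- a SELECT whose WHERE clause matches the expression uses the index:
SELECT * FROM xmltkt WHERE tkt_key(status, tktdate) = 'OPEN20240101';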
The eXpress train is coming - and it has more cars.

Tom
Posts: 1230
Joined: Thu Jan 28, 2010 12:59 am
Location: Berlin, Germany

Re: More PGDBE questions

#29 Post by Tom »

I assume the <bForCondition> codeblock in DbEval() is executed locally, which means the engine retrieves all records and evaluates the codeblock on the client against the full result set. So, imho, you can't get stored functions working here - you are already working on a result set, and this can get very slow with large tables.
Did you take a look at local/remote filters?
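Roughly speaking, the difference in what reaches the server looks like this (a sketch with an invented condition):

-- remote filter: the condition is translated to SQL, so the server
-- filters and only matching rows travel to the client:
SELECT * FROM xmltkt WHERE status = 'OPEN';

-- local filter (or a DbEval() codeblock): effectively this, with the
-- condition evaluated on the client for every row:
SELECT * FROM xmltkt;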
Best regards,
Tom

"Did I offend you?"
"No."
"Okay, give me a second chance."

rdonnay
Site Admin
Posts: 4804
Joined: Wed Jan 27, 2010 6:58 pm
Location: Boise, Idaho USA

Re: More PGDBE questions

#30 Post by rdonnay »

Tom wrote: Did you take a look at local/remote filters?
I haven't completely evaluated what gets passed to dbEval() in Terry Wolfe's reporting system.
They have hundreds of reports whose definitions are pre-processed from his own reporting language into an array of statements that are later macro-compiled and executed. My goal is to see how feasible it will be to convert the dbEval() parameters into a SQL query. My guess is that most of it is done through filtering, but I believe there are also a lot of seeks on the UDF indexes. I don't have enough information yet, but I will keep you informed.
The eXpress train is coming - and it has more cars.
