More PGDBE questions

Use this forum for questions and answers regarding PostgreSQL and the PGDBE.
rdonnay
Site Admin
Posts: 4801
Joined: Wed Jan 27, 2010 6:58 pm
Location: Boise, Idaho USA

More PGDBE questions

#1 Post by rdonnay »

1. Does the PostgreSQL server support transaction processing using DacSession:beginTransaction() and DacSession:endTransaction()?

2. I am trying to run DbfUpsize on the remote server but get an Alaska license error. This works fine on my development computer at home.
How are licenses applied when running on the server?

3. What is the practical limit of rows in a table using ISAM with lots of indexes? I am upsizing a table with 1,126,156 records, 91 fields, and 17 index tags.
The PostgreSQL server slows down to a crawl about halfway through the upsize. I'm monitoring the number of added rows with pgAdmin 4, and after 664,221 records
it can add only about 60 records per minute. I expect that to slow to a stop in a few hours. This is being done with the server and upsizer running on my workstation.
Of course, this would be much faster with PostgreSQL on a fast server, but I don't know how much faster. I am wondering if this process would speed up if I upsize the data
without the indexes first and then index later with the app (see the sketch below). Regardless, it looks like this table will need to be replaced by multiple tables in the migration. This table
records taxi driver transactions (trips), and there are approximately 3,000 trips per day for the drivers managed by our app. That's about 1.25 million records per year.
I am going to do the upsize again, but this time with no CDX. We may need to develop a completely different strategy for how we handle this much data.
BTW - there is no performance issue at all with ADS. We have been able to maintain as many as 4,000,000 records with no penalty.
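
For comparison, the standard PostgreSQL bulk-load recipe outside the PGDBE upsizer is to load an index-less table with COPY and build the indexes afterwards. A minimal sketch in Python with psycopg2; the connection settings, table layout, and file name are hypothetical, not taken from this thread:

Code:

# Classic PostgreSQL bulk-load: COPY into an index-less table, then index.
# Connection settings, table, columns, and file name are placeholders.
import psycopg2

conn = psycopg2.connect(host="localhost", dbname="medallion",
                        user="postgres", password="secret")
with conn, conn.cursor() as cur:
    # COPY streams rows far faster than row-by-row INSERT.
    with open("trips.csv") as f:
        cur.copy_expert(
            "COPY trips (trip_id, driver_id, fare) FROM STDIN WITH CSV", f)
    # Build the indexes only after the data is in place.
    cur.execute("CREATE INDEX idx_trips_driver ON trips (driver_id)")
conn.close()

Whether the PGDBE's own upsizer can be steered this way is a separate question; this only shows what plain PostgreSQL can do.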
The eXpress train is coming - and it has more cars.

rdonnay
Site Admin
Posts: 4801
Joined: Wed Jan 27, 2010 6:58 pm
Location: Boise, Idaho USA

Re: More PGDBE questions

#2 Post by rdonnay »

> I am wondering if this process would speed up if I upsize the data without the indexes first and then index later with the app.
I answered my own question.
The answer is NO.

My next strategy is to write my own upsizing routine.
My guess is that Alaska is using SQL INSERT statements to upsize the data.

I'm going to write a small program that does the same but I'm going to experiment with the size of the INSERT chunks.
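
For what it's worth, chunked multi-row INSERTs are easy to experiment with from Python via psycopg2's execute_values helper; page_size sets the number of rows per generated INSERT. The connection settings and table layout below are illustrative placeholders, not Alaska's actual upsizer:

Code:

# Multi-row INSERT in configurable chunks, to experiment with chunk size.
# Connection settings and table layout are illustrative placeholders.
import psycopg2
from psycopg2.extras import execute_values

conn = psycopg2.connect(host="localhost", dbname="medallion",
                        user="postgres", password="secret")
rows = [(i, i % 500, 12.75) for i in range(1_000_000)]  # stand-in for DBF rows

with conn, conn.cursor() as cur:
    # page_size is the chunk: how many rows each multi-row INSERT carries.
    execute_values(cur,
                   "INSERT INTO trips (trip_id, driver_id, fare) VALUES %s",
                   rows, page_size=1000)
conn.close()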

It would certainly be helpful if I could get some of these questions answered by Alaska Software, but alas, I have not been approved to post anything on their forum.
I am highly frustrated over the lag time in trying to get any kind of support from any developers these days.
The eXpress train is coming - and it has more cars.

SlavkoDam
Posts: 113
Joined: Wed Apr 27, 2022 10:12 am
Location: Negotin, Serbia

Re: More PGDBE questions

#3 Post by SlavkoDam »

You should stop struggling with the Alaska PGDBE; this is a road to nowhere. The deeper you go, the more problems and the fewer solutions you will find. The sooner you understand and accept that, the better for you. You should use my PowerSql library and convert your ISAM app to a true SQL app, very quickly and with minimal program changes.
There are free or trial tools for migration from DBF to any SQL database that work very fast and with no problems. You should use them instead of doing something you don't know, where you can't predict how it will work or what the results will be.
Best regards,

Slavoljub Damnjanovic
SD-SoftDesign, Alaska Software Technology Partner
https://www.sd-softdesign.com
https://www.sd-softdesign.rs

Tom
Posts: 1225
Joined: Thu Jan 28, 2010 12:59 am
Location: Berlin, Germany

Re: More PGDBE questions

#4 Post by Tom »

I would like to disagree. Some service functions such as DbfUpsize and file maintenance are still very slow, but the rest works excellently. Since "remote filters" were added in late summer, we can now successfully finalize the migration of a very large application.
Best regards,
Tom

"Did I offend you?"
"No."
"Okay, give me a second chance."

skiman
Posts: 1198
Joined: Thu Jan 28, 2010 1:22 am
Location: Sijsele, Belgium

Re: More PGDBE questions

#5 Post by skiman »

Hi Roger,
> I am highly frustrated over the lag time in trying to get any kind of support from any developers these days.
I would like to help you, but I have no experience with the PGDBE and I don't plan to do anything with it. I'm afraid it isn't used by many developers, so there is almost no experience with it in the field. It looks to me as if Tom is the only one who has succeeded in implementing it rather successfully?

It was announced at Venlo in the Netherlands in 2007. After 16 years they suddenly announced a yearly license fee per user. I'm wondering how many developers are taking this route to move to SQL? I'm also surprised you are considering it? You have developed SQLquery as a tool and have a lot of experience with it? Why would you go for the ISAM emulation? Do you expect it will take less development time to convert?
Best regards,

Chris.
www.aboservice.be

k-insis
Posts: 125
Joined: Fri Jan 28, 2011 4:07 am

Re: More PGDBE questions

#6 Post by k-insis »

You need to have the PGDBE server configured properly. A table with 1.5M records is something SQL shrugs off easily.

But: by my account, the PGDBE causes additional writes, not only on the main table but also on special columns, and it calls stored procedures for every insert, update, and delete. I might be wrong, as this was before the COVID epidemic.
So one insert may mean a write to the main table plus another two or three writes for indexes and such.

So it can slow down to a crawl.

> I'm going to write a small program that does the same but I'm going to experiment with the size of the INSERT chunks.
You can also open several connections at the same time to the same destination and distribute the record inserts across multiple threads, so you are not limited by a single thread's "insert -> wait for server reply -> next insert" cycle and can saturate the network connection.
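
A sketch of that idea in Python: one psycopg2 connection per thread, each thread inserting its own slice of the rows. All names and settings are illustrative, not from this thread:

Code:

# Distribute inserts across threads, one connection per thread, so a
# single insert/wait-for-reply loop no longer caps throughput.
# Connection settings and table layout are illustrative placeholders.
import threading
import psycopg2

def insert_slice(rows):
    conn = psycopg2.connect(host="localhost", dbname="medallion",
                            user="postgres", password="secret")
    with conn, conn.cursor() as cur:
        cur.executemany("INSERT INTO trips (trip_id, driver_id, fare) "
                        "VALUES (%s, %s, %s)", rows)
    conn.close()

rows = [(i, i % 500, 12.75) for i in range(100_000)]
n = 4  # worker count: tune against server cores and network latency
slices = [rows[i::n] for i in range(n)]
threads = [threading.Thread(target=insert_slice, args=(s,)) for s in slices]
for t in threads:
    t.start()
for t in threads:
    t.join()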

> I am trying to run DbfUpsize on the remote server but get an Alaska license error. This works fine on my development computer at home.
> How are licenses applied when running on the server?

This explains it: they do count on the server side. But it is deeply flawed, as it scans all connections, so if you have an app server or web server using the same PostgreSQL instance, the server license count might be off. Link to ILX:
https://ilx.alaska-software.com/index.p ... count.152/

> It would certainly be helpful if I could get some of these questions answered by Alaska Software, but alas, I have not been approved to post anything on their forum.

The ILX forum is open only to paid subscribers in their customer database.
Send mail to Peter Bayer at info(at)alaska-software.com to clear up your status.

The old newsgroup is effectively dead and has not been monitored in years.



Lg, Matej




k-insis
Posts: 125
Joined: Fri Jan 28, 2011 4:07 am

Re: More PGDBE questions

#7 Post by k-insis »

skiman wrote: I'm afraid it isn't used by a lot of developers, so there is almost no experience with it in the field.
This.

I honestly tried the PGDBE but, after some longer testing, decided not to use it; it was slow and limited to a single type of DBMS. One of the things I missed at that time (just before or around COVID?) was clear information on how to secure the PGDBE connection (requiring SSL and a client-side security certificate).

We switched to pure SQL around 2005-2006, with SqlExpress as the link to ODBC, and have been using it ever since. Apart from some headaches due to Alaska Xbase++ not supporting UTF/Unicode natively, it still works fine 20 years later.

And over the years I have found there isn't that much work in actually going from ISAM to SQL. Some of it can even be automated with Linux sed, grep, and awk.

rdonnay
Site Admin
Posts: 4801
Joined: Wed Jan 27, 2010 6:58 pm
Location: Boise, Idaho USA

Re: More PGDBE questions

#8 Post by rdonnay »

> I'm wondering how many developers are taking this route to move to SQL? I'm also surprised you are considering it?
> You have developed SQLquery as a tool and have a lot of experience with it? Why would you go for the ISAM emulation?

I have always suspected that PostgreSQL ISAM emulation was not a feasible solution for migrating existing code, so I spent almost no time with it until Tom Liehr said he was having success.

I am a consultant to other programmers and do not have a business application of my own. I write tools and libraries. I did this successfully with Clipper and, so far, have been quite successful with Xbase++. I helped many companies with large amounts of data migrate successfully to the ADSDBE. I always felt that, as their data grew in size, their applications would continue to perform. I have worked with DBF tables as large as 22 GB with ADS. Due to its great ISAM performance, there were only a few places in the code where it was necessary to use SQL statements. Those were in areas where a large amount of data needed to be added to a table, so I used SQL INSERT. The Medallion application inserts as many as 10,000 records per day into a few tables. This app has been running with no data problems for 25 years.

I have been asked by my client to help with a migration from the ADSDBE to the PGDBE, because SAP will no longer sell ADS licenses after the end of this year.
Actually, I was looking forward to this because it offered some challenges.
Now, I am not convinced that this migration can succeed.
In my testing, upsizing performance degrades along an exponential curve with PostgreSQL. The practical limit appears to be about 500,000 records.
I am going to experiment with writing my own upsizer so it exports only the last 500,000 records.
If this works, then maybe I can write a data manager service that will move data around and store it as chunks in multiple tables (see the sketch below).
I don't want to do that. Some of these apps require access to old data on a regular basis.
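
For reference, plain PostgreSQL can do that chunking itself with declarative partitioning, so queries still see one table while old years live in their own partitions. Whether the PGDBE's ISAM layer tolerates a partitioned table is untested here; the schema below is hypothetical:

Code:

# Declarative partitioning: PostgreSQL stores yearly "chunks" itself,
# while queries still address one logical table. Schema is hypothetical.
import psycopg2

conn = psycopg2.connect(host="localhost", dbname="medallion",
                        user="postgres", password="secret")
with conn, conn.cursor() as cur:
    cur.execute("""
        CREATE TABLE trips (
            trip_id   bigint,
            driver_id integer,
            trip_date date,
            fare      numeric(8,2)
        ) PARTITION BY RANGE (trip_date)
    """)
    # One partition per year; old years stay queryable but off the hot path.
    cur.execute("""CREATE TABLE trips_2024 PARTITION OF trips
                   FOR VALUES FROM ('2024-01-01') TO ('2025-01-01')""")
    cur.execute("""CREATE TABLE trips_2025 PARTITION OF trips
                   FOR VALUES FROM ('2025-01-01') TO ('2026-01-01')""")
conn.close()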

I may have to advise my 2 clients, who are asking for my help, to find another solution.

The Medallion program is a large amount of code, about 15 MB of Xbase++ source, all ISAM based.
Fortunately, ADS server will continue to work for the foreseeable future.
We use SQL to trim old data and send it to archives.
The eXpress train is coming - and it has more cars.

rdonnay
Site Admin
Posts: 4801
Joined: Wed Jan 27, 2010 6:58 pm
Location: Boise, Idaho USA

Re: More PGDBE questions

#9 Post by rdonnay »

> I would like to disagree. Some service functions such as DbfUpsize and file maintenance are still very slow, but the rest works excellently.
It appears that once the data has been upsized, performance is acceptable, but it can take many days to get the old data into PostgreSQL tables.
Therefore, it is not a process that can be repeated again and again while migrating the application.
Once the import is complete, there's no going back.

I cannot settle on a workable strategy until I understand all the ramifications. That will take a lot of experimenting.

If I have 1-2 years to do this, then in my experience, it will probably succeed.
Are my clients willing to pay for all this time? I don't know yet. It's not only my time, but their time as well.
The eXpress train is coming - and it has more cars.

Auge_Ohr
Posts: 1428
Joined: Wed Feb 24, 2010 3:44 pm

Re: More PGDBE questions

#10 Post by Auge_Ohr »

hi Roger,

have you tried sending more than one INSERT at a time, to make full use of the cache?
I can send more than 8 INSERTs, separated by ";", in a single oPG:exec( cIns ) call to speed up the upsize.
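
The same trick from Python, for reference: psycopg2 will send several semicolon-separated statements in one execute() round trip. Table and values are illustrative placeholders; in real code, prefer parameterized inserts:

Code:

# Several INSERTs in one statement string: one network round trip
# instead of eight. Table and values are illustrative placeholders.
import psycopg2

conn = psycopg2.connect(host="localhost", dbname="medallion",
                        user="postgres", password="secret")
with conn, conn.cursor() as cur:
    batch = "; ".join(
        f"INSERT INTO trips (trip_id, driver_id, fare) VALUES ({i}, {i % 500}, 12.75)"
        for i in range(8)
    )
    cur.execute(batch)  # the whole batch travels in a single exec
conn.close()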
greetings by OHR
Jimmy
