ISTech Support Forum
http://www.istechforum.com/YaBB.pl
Crystal Reports, ODBC & Access >> Crystal Reports General Issues >> Version 11 very slow?
http://www.istechforum.com/YaBB.pl?num=1189000893

Message started by David Waldmann on 09/05/07 at 07:01:33

Title: Version 11 very slow?
Post by David Waldmann on 09/05/07 at 07:01:33

At first, on some reports it was taking so long to refresh data that I thought something must be broken, and I would end up using Task Manager to kill it.

For instance, I have a report I use for commission that uses BKARCUST, BKARHINV, BKARHIVL & BKICMSTR, using ODBC. In version 8 it would take about 5 seconds to display one month's worth of data. In v11 it's taking minutes - I haven't timed it, but probably close to 10. Most of the time is spent "Accessing Database".

Do I need to set something up differently?

Title: Re: Version 11 very slow?
Post by kkmfg on 09/05/07 at 08:51:56

You might keep an eye on the network usage while the report is running. If little data is passing around then something weird is going on. Is the CPU utilization quite high when the report generates? Also, is it possible that for some reason v11 is requesting the entire file instead of just requesting the month's worth of data? Watching network utilization could help diagnose that too.

Title: Re: Version 11 very slow?
Post by David Waldmann on 09/05/07 at 09:34:31

Just ran it again.

CPU utilization 2-6%
Network utilization 0-0.75%

Title: Re: Version 11 very slow?
Post by kkmfg on 09/05/07 at 12:05:16

That sounds pretty lazy. Maybe you need to give your computer a pep talk or a raise or something. ;-)

Within DBA (I never did get CR)
If I run DE-A and export an inventory list with part #, price, and desc I get about 1% utilization of the network. If I output every field I get about 0.3% utilization. But that's probably because it's an interpreted language. DBA never gets anywhere close to saturating the network link here. (We've even got 2.4GHz P4 machines!)

As for CR... Well, it's a bit odd that one version is so slow and the other is not. Have you tried running the same or similar report in both one after the other? Does v8 still run much faster than 11? Maybe there is some sort of driver issue? It seems like there should not be as both 8 and 11 should use the same Pervasive ODBC driver. Have you tried designing a report from scratch in v11 to see if maybe there is some sort of weird glitch when importing V8 reports?

I guess all I can do is try to help brainstorm... Hopefully it helps to get you on the right track...



David Waldmann wrote:
Just ran it again.

CPU utilization 2-6%
Network utilization 0-0.75%


Title: Re: Version 11 very slow?
Post by David Waldmann on 09/05/07 at 13:03:30


kkmfg wrote:
That sounds pretty lazy. Maybe you need to give your computer a pep talk or a raise or something. ;-)

I'm not sure what you're getting at. It seems like you are reporting just about the same thing I did. Unless I am misinterpreting?



kkmfg wrote:
Have you tried running the same or similar report in both one after the other? Does v8 still run much faster than 11?

I first opened the v8 report in v11 and it took a long time. I saved it in v11, then opened the original in v8, and it was still fast. I then refreshed it in v11 again, and it still took a long time.



kkmfg wrote:
Have you tried designing a report from scratch in v11 to see if maybe there is some sort of weird glitch when importing V8 reports?

Haven't tried that yet. Maybe I'll give it a go if I can't figure anything else out.

Title: Re: Version 11 very slow?
Post by kkmfg on 09/06/07 at 07:33:10


David Waldmann wrote:
That sounds pretty lazy. Maybe you need to give your computer a pep talk or a raise or something. ;-)

I'm not sure what you're getting at. It seems like you are reporting just about the same thing I did. Unless I am misinterpreting?

Just joking really...

Although, if the bottleneck were TAS you'd expect to see the CPU usage a lot higher. If the bottleneck were network related you should have at least 20% network utilization. Since both your CPU and your network utilization were nearly non-existant we need to look for another cause. As I had mentioned before, you could try making a new report in v11 and see what happens. if that doesn't work then I'm not sure what is going on. I know how you could find out but it's somewhat drastic. You *could* download AMD CodeAnalyst. It is a full system profiler and will tell you what the system spent it's time on. Assuming most things have debugging symbol tables you should even see the function names. The TAS7 runtime is however encrypted and has debugger traps so I do not know how well a profiler would like it. I do, however, believe that it has a symbol table. However, unless you've ever done something like that before it's probably not something you want to get into now. At any rate it's pretty obvious that your system is sitting around idle most of the time (in V11) for seemingly no reason at all. The key is to figure out what it's waiting for. The above discussion on system profiling is one way to try to get to the bottom of that.



Title: Re: Version 11 very slow?
Post by David Waldmann on 09/06/07 at 07:53:25


kkmfg wrote:
You *could* download AMD CodeAnalyst. It is a full system profiler and will tell you what the system spent its time on. Assuming most things have debugging symbol tables, you should even see the function names. The TAS7 runtime is encrypted and has debugger traps, however, so I do not know how well a profiler would handle it. I do believe that it has a symbol table. Still, unless you've done something like that before, it's probably not something you want to get into now.


Hmmm, yeah that sounds a bit over my head.

Thanks for the clarification and thoughts though.

Kevin is taking a look at my v8 report through his v10. Will let you know if he/we find anything.

Title: Re: Version 11 very slow?
Post by kevind on 09/06/07 at 12:36:50

The root cause of this report's problem is that the Invoice Date field in BKARHINV is not indexed.

Lynn / Allan - Could an index for this field be added to this table?

I do not know why the V8 version of this report ran so fast. Both are accessing the DBA data with ODBC.

By using INVTXN (where invoice date IS indexed), the report can be made to run very fast.
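For what it's worth, the difference an index makes here can be sketched with any SQL engine. Below is a minimal illustration using Python's built-in sqlite3 as a stand-in for Pervasive; the table and column names (bkarhinv, invdte, etc.) are simplified guesses, not the real DBA schema. Without an index on the invoice date, a date-range query forces a full table scan; with one, it becomes an index range search:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Hypothetical, simplified invoice-header table standing in for BKARHINV.
con.execute("CREATE TABLE bkarhinv (invnum TEXT PRIMARY KEY, custcode TEXT, invdte TEXT)")
con.executemany("INSERT INTO bkarhinv VALUES (?, ?, ?)",
                [(f"INV{i:05d}", "C001", f"2007-{(i % 12) + 1:02d}-15") for i in range(1000)])

query = "SELECT * FROM bkarhinv WHERE invdte BETWEEN '2007-08-01' AND '2007-08-31'"

# No index on invdte yet: the plan is a full table scan.
plan_before = con.execute("EXPLAIN QUERY PLAN " + query).fetchall()
print(plan_before[0][3])  # e.g. "SCAN bkarhinv"

# With the index, the same query becomes an index range search.
con.execute("CREATE INDEX idx_invdte ON bkarhinv (invdte)")
plan_after = con.execute("EXPLAIN QUERY PLAN " + query).fetchall()
print(plan_after[0][3])  # e.g. "SEARCH bkarhinv USING INDEX idx_invdte ..."
```

The real fix is the same idea on the Pervasive side: an index on BKARHINV's invoice date would let the relational engine satisfy the month filter without reading the whole file.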

Title: Re: Version 11 very slow?
Post by kkmfg on 09/06/07 at 12:52:43


kevind wrote:
The root cause of this report's problem is that the Invoice Date field in BKARHINV is not indexed.

Lynn / Allan - Could an index for this field be added to this table?

I do not know why the V8 of this report ran so fast.  Both are accessing the DBA data with ODBC.

By using INVTXN (where invoice date IS indexed), the report can be made to run very fast.


Hmmm, beats me why one would run so much faster than the other in that case.

What I do know is that you pay a performance penalty in both database size and write speed when you add a bunch of indices. If the index isn't absolutely necessary then it's up for debate whether it should really be added. If the necessary data can all be culled from invtxn instead then maybe it would be best to query that table instead and leave bkarhinv alone.

Title: Re: Version 11 very slow?
Post by kevind on 09/06/07 at 13:35:24


Quote:
What I do know is that you pay a performance penalty in both database size and write speed when you add a bunch of indices. If the index isn't absolutely necessary then it's up for debate whether it should really be added. If the necessary data can all be culled from invtxn instead then maybe it would be best to query that table instead and leave bkarhinv alone.


In the context of this table (BKARHINV) ...

1. With multi-gig drives, database size is not a factor (for us anyway).

2. Write speed would only be a factor if thousands of records were inserted in one operation. Invoicing does not involve thousands of records at a time.

3. "Stealing" the Invoice date index from INVTXN is very inefficient: it adds a level of joining burden on the relational engine, and for each unique invoice number there are several records in this table (one per line item) that the relational engine must throw out (wasted time).
The best SQL is simple SQL (but not too simple! - Albert Einstein  ;)  ).
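Point 3 can be demonstrated concretely: when the date filter lives on the line-item table, each matching invoice comes back once per line item, and those duplicates have to be discarded. A small sqlite3 sketch (hypothetical, simplified schemas, not the real BKARHINV/INVTXN layouts):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE bkarhinv (invnum TEXT PRIMARY KEY, invdte TEXT);      -- invoice headers
    CREATE TABLE invtxn   (invnum TEXT, lineno INTEGER, invdte TEXT);  -- one row per line item
    CREATE INDEX idx_txn_dte ON invtxn (invdte);                       -- the "borrowed" date index
""")
con.execute("INSERT INTO bkarhinv VALUES ('INV001', '2007-08-15')")
con.executemany("INSERT INTO invtxn VALUES (?, ?, ?)",
                [("INV001", n, "2007-08-15") for n in (1, 2, 3)])  # three line items

# Finding August invoice headers through INVTXN's date index returns
# one row per line item, not one per invoice:
joined = con.execute("""
    SELECT h.invnum FROM bkarhinv h
    JOIN invtxn t ON t.invnum = h.invnum
    WHERE t.invdte BETWEEN '2007-08-01' AND '2007-08-31'
""").fetchall()
print(len(joined))  # 3 rows for a single invoice

# The engine has to spend time collapsing them back down:
deduped = con.execute("""
    SELECT DISTINCT h.invnum FROM bkarhinv h
    JOIN invtxn t ON t.invnum = h.invnum
    WHERE t.invdte BETWEEN '2007-08-01' AND '2007-08-31'
""").fetchall()
print(len(deduped))  # 1
```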


Title: Re: Version 11 very slow?
Post by kkmfg on 09/06/07 at 13:40:03


kevind wrote:

Quote:
What I do know is that you pay a performance penalty in both database size and write speed when you add a bunch of indices. If the index isn't absolutely necessary then it's up for debate whether it should really be added. If the necessary data can all be culled from invtxn instead then maybe it would be best to query that table instead and leave bkarhinv alone.


In the context of this table (BKARHINV) ...

1. With Multi Gig Drives, database size is not a factor (for us anyway)

2. Write Speed would only be a factor if thousands of records were inserted in an operation.  Invoicing does not involve Thousands of records at a time.

3. "Stealing" the Invoice date index from INVTXN is very inefficient because it adds a level of joining burden on the Relational Engine and for each unique invoice number, there are several records in this table with this number (one for each line item) that the relational engine must throw out (wasted time).
The Best SQL is simple SQL (but not too simple! - Albert Einstein  ;)  ).



#1: I suppose that's true.
#2: Yes, the index must be updated on every write but if you are doing very few writes then the performance drop will be very small.
#3: Yes, in that case I see your point.

So forget what I said. ;-)

Title: Re: Version 11 very slow?
Post by dameng on 09/08/07 at 09:36:28

Kevin,
Since the data is being accessed through ODBC in either version, you can edit the table definition in Pervasive and create your own index.

hth

Title: Re: Version 11 very slow?
Post by kevind on 09/08/07 at 10:16:58

Yes, I have done this with our in house applications that use Pervasive.

However, I believe that any changes to the database would be undone if ODBCDDF is run again.
ODBCDDF may need to be modified in order to make the new indexed field STICK.
The only reason to run ODBCDDF more than once is to incorporate changes to the ODBC database when changes happen in the Btrieve Tables.
I already have a .SQL script I run on the .DDF's created by ODBCDDF to enable security and define views that I use in some reports.

Since the TAS programs are so tightly bound to the Btrieve Tables, I assumed that my additional index would end up getting bounced.
(Collin ... I know "you hate Btrieve / TAS"   ;D )

Allan / Lynn ... Is this true, or can I add indexes to my tables ???

Title: Re: Version 11 very slow?
Post by kkmfg on 09/09/07 at 08:38:44

When you use ODBC you are just using the relational engine to access the btrieve data. I highly doubt that adding an index through ODBC is possible without also adding the index to the btrieve files. An index, after all, has to be stored on disk if it's to do any good. Why would PervasiveSQL create an index only for ODBC? My guess is that you can add indices to a btrieve table, but that the original program won't necessarily use the index.

Title: Re: Version 11 very slow?
Post by kevind on 09/09/07 at 18:34:17


kkmfg wrote:
When you use ODBC you are just using the relational engine to access the btrieve data. I highly doubt that adding an index through ODBC is possible without also adding the index to the btrieve files. An index, after all, has to be stored on disk if it's to do any good. Why would PervasiveSQL create an index only for ODBC? My guess is that you can add indices to a btrieve table, but that the original program won't necessarily use the index.


Yes, that is true.  The index is embedded in the table (unless you create an index-only file).  I believe the TAS
applications will not use it, though, because they are not written to access any invoice data by date.  In programs that allow
you to specify an invoice date, the program logic compares the dates of the records returned and ignores any that do not fall in the specified range.
This is an example of some of the old junk code that ISTECH support has to deal with.  It works great on the demo data that has 12 records in the
file, but is not practical for real datasets.
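The contrast between that "fetch everything, then compare dates in program logic" pattern and letting an indexed query do the filtering looks roughly like this (sqlite3 again, with a made-up schema, purely as an illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE bkarhinv (invnum TEXT PRIMARY KEY, invdte TEXT)")
con.executemany("INSERT INTO bkarhinv VALUES (?, ?)",
                [(f"INV{i:05d}", f"2007-{(i % 12) + 1:02d}-10") for i in range(1200)])

lo, hi = "2007-08-01", "2007-08-31"

# The "old junk code" pattern: read every record, test the date in the program.
in_range = [r for r in con.execute("SELECT invnum, invdte FROM bkarhinv")
            if lo <= r[1] <= hi]           # fetches all 1200 rows just to keep 100

# With a date index, the database returns only the matching records.
con.execute("CREATE INDEX idx_invdte ON bkarhinv (invdte)")
indexed = con.execute(
    "SELECT invnum, invdte FROM bkarhinv WHERE invdte BETWEEN ? AND ?",
    (lo, hi)).fetchall()

print(len(in_range), len(indexed))  # 100 100 - same result, far less data moved
```

Both approaches work fine on 12 demo records; on a 72MB file, only the second one scales.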

My concern was regarding the ODBCDDF.RUN program that creates the ODBC compatible DDF files.  
If it is not modified, then when it creates the INDEX.DDF file, it will not include the definition for any new indexes that have been
created (and embedded) in the tables.

Now, having said that, it is only necessary to run ODBCDDF.RUN when there is a change to the table definition.  This does not happen
very often.  It would be nice if I did not lose the newly created index when it becomes necessary to run ODBCDDF.RUN in the future.

Title: Re: Version 11 very slow?
Post by kkmfg on 09/10/07 at 06:32:36

Ahh... I see what you mean. Yes, the index.ddf file may not include the new index and so you'd have to add it back.

Title: Re: Version 11 very slow?
Post by kkmfg on 09/10/07 at 13:21:31


kevind wrote:
Now, having said that, it is only necessary to run ODBCDDF.RUN when there is a change to the table definition.  This does not happen
very often.  It would be nice if I did not lose the newly created index when it becomes necessary to run ODBCDDF.RUN in the future.


Well, since PervasiveSQL makes file.ddf, field.ddf, and index.ddf into tables, it should be possible to store SQL commands that add the indices back in (along with your security additions and views). Of course, then you'd have trouble if they ever did decide to add the index themselves. In a perfect world the Evo updater would scan the file, field, and index tables for things it needs to add or change and leave the rest alone. I'm not sure what it currently does when it has to update.
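One way to make the additions survive an ODBCDDF rebuild, along the lines of the security/view script kevind already uses, is to keep them in an idempotent fix-up script and re-run it after every rebuild. A sketch of the idea (SQLite syntax via Python; Pervasive's SQL dialect differs and IF NOT EXISTS may not be available there, so treat this as the concept, not working PSQL):

```python
import sqlite3

# Hypothetical fix-up script kept alongside the existing security/view script:
# re-run it after every ODBCDDF rebuild to restore local additions.
FIXUP = """
    CREATE INDEX IF NOT EXISTS idx_invdte ON bkarhinv (invdte);
    CREATE VIEW IF NOT EXISTS aug_invoices AS
        SELECT * FROM bkarhinv
        WHERE invdte BETWEEN '2007-08-01' AND '2007-08-31';
"""

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE bkarhinv (invnum TEXT PRIMARY KEY, invdte TEXT)")
con.executescript(FIXUP)  # first run: creates the index and the view
con.executescript(FIXUP)  # safe to run again after a rebuild - nothing breaks

names = {row[0] for row in con.execute(
    "SELECT name FROM sqlite_master WHERE name IN ('idx_invdte', 'aug_invoices')")}
print(names)  # both objects exist exactly once
```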

Title: Re: Version 11 very slow?
Post by NovaZyg on 09/10/07 at 14:16:03

The Evo update does just that. It looks to see what you have in your dictionary and compares it to the xfr files that I supply (my dictionary). If it finds new stuff in mine, it adds it. If you have new stuff, it ignores it and just leaves it there.
If I were to add the index to the update, then if I used the same name as you did it would just leave your stuff alone, but if I had a different name (and values/fields) for the index it would create mine.

;)

As for the DDFs, I have nothing to do with that stuff. There are two utils that were provided by DBA way back when - UT-J and ODBCDDF - that read the current DBA btrieve dictionary and create the ODBC files. I do not have the source to either of those programs, so I do not know exactly what they do.
::)

Title: Re: Version 11 very slow?
Post by kevind on 09/10/07 at 17:57:08

Allan,
Are you saying that if the DBA Btrieve dictionary for Classic changes (when there is a database change), ODBCDDF will create the ODBC-compatible .DDF files from the Btrieve dictionary?

If so, is it correct that it is only necessary to add any new index definitions in the Btrieve dictionary to make them 'stick' when ODBCDDF is run?

If so, then I would lobby for the Invoice Date field in BKARHINV to be indexed.

Title: Re: Version 11 very slow?
Post by NovaZyg on 09/11/07 at 08:14:18

Kevin,

Yes, that is exactly what I am saying. But the downside is that if I add it, then on our next update it will have to create this index for everyone, and the ARH files tend to be BIG files, so it will take time. If you can get all the users to agree to it, I will add it. Also, what do you want - just the INVDTE, or INVDTE+INV.NUM, or...?

Title: Re: Version 11 very slow?
Post by kkmfg on 09/11/07 at 09:05:56


NovaZyg wrote:
Kevin,

Yes, that is exactly what I am saying. But the downside is that if I add it, then on our next update it will have to create this index for everyone, and the ARH files tend to be BIG files, so it will take time. If you can get all the users to agree to it, I will add it. Also, what do you want - just the INVDTE, or INVDTE+INV.NUM, or...?


One thing that irks me about btrieve is the lack of bitmap indexing. In PostgreSQL you can index individual columns, and when a query involves multiple columns the planner can combine the single-column indexes with bitmap scans to satisfy the multi-column condition. Quite nice...

Anyway... Unless there are huge numbers of invoices a day I would think that indexing by just the date would be fine. So that's my vote.

Title: Re: Version 11 very slow?
Post by kevind on 09/11/07 at 12:33:11


kevind wrote:
The root cause of this report's problem is that the Invoice Date field in BKARHINV is not indexed.

Lynn / Allan - Could an index for this field be added to this table?

I do not know why the V8 of this report ran so fast.  Both are accessing the DBA data with ODBC.

By using INVTXN (where invoice date IS indexed), the report can be made to run very fast.


Further investigation reveals that I made more than one change when I used INVTXN to provide an Invoice date index.
I also changed the order of linking in the report.

So here is the bottom line:

No Invoice date index:
8 minutes to run with original table links
7 seconds to run with revised links (BKARHINV as root table instead of BKARCUST)

With an Invoice date index:
1 Minute 30 seconds with Original table Links
2 seconds to run with revised links

Other times:
Time to add index through Table Designer: 10 minutes (72MB file; Pervasive had to expand the file a couple of MB and allocate index pages)
Time to remove index through Table Designer: 20 seconds
Time to add index again through Table Designer: 14 seconds (internal index pages were already allocated; it just needed to re-index 49,650 records)

ISTech Support Forum » Powered by YaBB 2.1!
YaBB © 2000-2005. All Rights Reserved.