Application Development Discussions
Join the discussions or start your own on all things application development, including tools and APIs, programming models, and keeping your skills sharp.

Is SQL faster than SE11?

Former Member
0 Kudos

Hi guys,

I have a problem using SE11. Since my table holds more than one milliard (billion) records, it takes very long when I use SE11 or SE16 to check how much data there is. Would it be faster if I used a SQL statement in ABAP,

like

select max ()

from ...

If not, how can I get this information (how many records there are, and so on) as quickly as possible?

Thanks for any hint!

Regards,

Liying

1 ACCEPTED SOLUTION

former_member194613
Active Contributor
0 Kudos

yes of course, the statement is o.k., but the trick (the improvement) will not work.

Siegfried

23 REPLIES

Former Member
0 Kudos

If there are millions of records, then write the SQL in a program, schedule that program in the background, and check the spool for the number of records:

select count(*) from ztablw.
write : 'No of Records :', sy-dbcnt.

The value of sy-dbcnt can be seen in the spool output.

0 Kudos

Thanks Chandrasekhar Jagarlamudi,

this might be helpful, but I'd like to get the maximum value of one field. Should "MAX" be used for that, then? How can I use it as fast as possible?

0 Kudos

Hi ,

if you want to select the maximum value of a field, you can try this:

REPORT ZTEST.

DATA : V_CARRID LIKE SFLIGHT-CARRID.

SELECT MAX( CARRID ) FROM SFLIGHT INTO V_CARRID.

0 Kudos

Hi,

thanks, guys.

My problem is that I have a table with more than 1,000,000,000 records. It takes very long to get the information I need by using

SELECT MAX...

Is there any more efficient way?

former_member194613
Active Contributor
0 Kudos

How many records do you have in your table, 1.000.000 or 1.000.000.000????

The maximum works in the same way as said: use max( field ) instead of count(*).

There is no way to get the maximum by SE11 or SE16.

Siegfried

former_member194613
Active Contributor
0 Kudos

In principle there is no better solution; 10^9 (one billion) records is extremely large.

Question: how important is your query in relation to the other queries and changes going on on this table? Tables obviously only become that large if there are a lot of inserts, and the inserts are probably much more important than your query.

If not, then you could create an index with your field as first column.

And you need to know a reasonably large start-point.

select max (field) from table

where field > field_0.

could work faster. But you must be aware that the index costs database size and performance (inserts are slower). If your query is executed only once in a while, then it is better not to optimize it.
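A rough sketch of that idea (ZHUGETAB and ZVALUE are only placeholder names; the secondary index with ZVALUE as its first column and the already known start point are assumptions, not part of the original suggestion):

* ZHUGETAB / ZVALUE are hypothetical; an index with ZVALUE as first column is assumed.
DATA: lv_max   TYPE zhugetab-zvalue,
      lv_start TYPE zhugetab-zvalue.

* lv_start must already contain a reasonably large, known lower bound.
SELECT MAX( zvalue ) FROM zhugetab
  INTO lv_max
  WHERE zvalue > lv_start.

WRITE: / 'Current maximum:', lv_max.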

Check whether data archiving is not the much better option!

0 Kudos

Thanks Siegfried,

the solution with the index cannot be used, since it would cause a performance problem during inserts, as you said.

So in your opinion it is not possible to improve it, is it?

Regards,

Liying

alejandro_bindi
Active Contributor
0 Kudos

If you need the max value across all table entries, there's no way to speed it up except maybe creating an index on that column (depending on its type and the number of possible values, that may or may not help). How many possible values does that column have? If it's a date field, an index may help; if it's a gender field (M/F), there's no use in creating it.

Otherwise, it will always be a full table scan (which has a big performance cost).

regards

0 Kudos

Thanks Alejandro Bindi ,

I think I got the answer: it is not possible to improve the performance except by creating an index. Although it is sad news, thanks so much for your help.

Regards

0 Kudos

I don't know if this is faster than SELECT MAX, but you can give it a try:


REPORT ztest LINE-SIZE 80 MESSAGE-ID 00.

TABLES: bsis.

SELECT dmbtr
  FROM bsis
  INTO bsis-dmbtr
  UP TO 1 ROWS
  ORDER BY dmbtr DESCENDING.
  WRITE: /001 bsis-dmbtr.
ENDSELECT.

Later, I ran a small test of this code against SELECT MAX with the performance trace turned on. The results (of the EXPLAIN) were almost identical.

However, for aggregate functions (like MAX) buffering is turned off, so my approach may be slightly faster.

Anyway, try it and see.
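If you want to measure the difference yourself, a rough comparison harness like the following should do (this is just a sketch using GET RUN TIME FIELD, with BSIS/DMBTR as in the snippet above; it was not part of my trace test):

DATA: lv_t0   TYPE i,
      lv_t1   TYPE i,
      lv_diff TYPE i,
      lv_max  TYPE bsis-dmbtr.

* Variant 1: aggregate function
GET RUN TIME FIELD lv_t0.
SELECT MAX( dmbtr ) FROM bsis INTO lv_max.
GET RUN TIME FIELD lv_t1.
lv_diff = lv_t1 - lv_t0.
WRITE: / 'SELECT MAX  :', lv_diff, 'microseconds'.

* Variant 2: UP TO 1 ROWS with ORDER BY
GET RUN TIME FIELD lv_t0.
SELECT dmbtr FROM bsis INTO lv_max
  UP TO 1 ROWS
  ORDER BY dmbtr DESCENDING.
ENDSELECT.
GET RUN TIME FIELD lv_t1.
lv_diff = lv_t1 - lv_t0.
WRITE: / 'UP TO 1 ROWS:', lv_diff, 'microseconds'.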

Rob

Message was edited by:

Rob Burbank

0 Kudos

Thanks Rob.

I will try it, but I think that with so much data it will be difficult to gain performance...

But anyway thanks.

former_member194613
Active Contributor
0 Kudos

Hi,

the trick with UP TO 1 ROWS plus ORDER BY will not work, because the ORDER BY must be processed first, and then there is anyway only one record coming back.

Check data archiving; one billion records is extremely large for a table anyway. Usual large tables have 1 to 10 million records.

Siegfried

0 Kudos

Hi Siegfried,

what do you mean by "will not work"? Do you mean that the performance will not be improved? Because it actually works, but so far I have not been able to transport it to the productive system to test the performance.

I know there is too much data, and the reason is our data model. Until our data model is changed, I am afraid we have to keep this data.

Thanks

0 Kudos

Siegfried - I tested after I posted. The results were almost identical. Can you show an example where it doesn't work?

Rob

0 Kudos

I think what he meant is that it won't enhance performance either, i.e. the optimization won't work; the query itself is valid.

Regards

former_member194613
Active Contributor
0 Kudos

yes of course, the statement is o.k., but the trick (the improvement) will not work.

Siegfried

0 Kudos

Well, the "trick" was that buffering is turned off for the aggregate function, but I don't think it is for the ORDER BY. My thinking is that it might give a slight statistical improvement.

My testing didn't show any, but it will be up to the original poster to decide how to proceed.

Rob

0 Kudos

Thanks u guys!

I will post the result here if I am possible to use Robs' suggestion.

0 Kudos

Yes Rob, buffering is off for ORDER BY:

http://help.sap.com/saphelp_nw04/helpdata/en/aa/4734a00f1c11d295380000e8353423/content.htm

The SELECT statement bypasses the buffer when you use any of the following:

  • The BYPASSING BUFFER addition in the FROM clause

  • The DISTINCT addition in the SELECT clause

  • Aggregate expressions in the SELECT clause

  • Joins in the FROM clause

  • The IS NULL condition in the WHERE clause

  • Subqueries in the WHERE clause

  • The ORDER BY clause

  • The GROUP BY clause

  • The FOR UPDATE addition

No harm to try though.

Regards

0 Kudos

OK - that's it then. I thought this might be the case, but didn't see it in the documentation.

Mea culpa.

Rob

0 Kudos

Liying,

I know this thread is closed, but I'm curious just what kind of data might exceed 1 billion rows? We are a (smaller) financial institution, our FI document items (cluster RFBLG) are approaching 100 million rows, and I thought that was a lot.

Btw, to quickly check the number of records of such a beast across all clients, I am using transaction DB20 (provided the DB statistics are up to date).

Cheers

Thomas

0 Kudos

Hi Thomas,

We are dealing with PP and load data to BI. Since our data requires a starting point and has to be calculated from an arbitrary point, these calculations lead to this amount of data records.

I have no permission to execute transaction DB20. Thanks though.

Regards,

Former Member
0 Kudos

Write a SELECT query in a program and run that program in the background.
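A rough sketch of what the scheduling could look like from code (ZCOUNT_RECORDS and ZCOUNT_JOB are hypothetical names; the report itself would just contain the SELECT COUNT(*) shown earlier in the thread, and it could equally well be started in the background from SE38):

DATA: lv_jobname  TYPE tbtcjob-jobname VALUE 'ZCOUNT_JOB',
      lv_jobcount TYPE tbtcjob-jobcount.

* Open a background job, submit the counting report into it, then release it.
CALL FUNCTION 'JOB_OPEN'
  EXPORTING
    jobname  = lv_jobname
  IMPORTING
    jobcount = lv_jobcount.

* ZCOUNT_RECORDS is assumed to be the report with the SELECT COUNT(*).
SUBMIT zcount_records VIA JOB lv_jobname NUMBER lv_jobcount AND RETURN.

CALL FUNCTION 'JOB_CLOSE'
  EXPORTING
    jobname   = lv_jobname
    jobcount  = lv_jobcount
    strtimmed = 'X'.   " start immediately

The number of records then appears in the job's spool output in SM37.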