Update Statistics - SAP ASE

Former Member

Dear Colleagues,

One of the scheduled UPDATE STATISTICS jobs failed to run, so I tried to execute it manually.

1> update index statistics 'S30.SAPSR3./BI0/F0SMD_PE2H'
2> go
Msg 102, Level 15, State 181:
Server 'S30', Line 1:
Incorrect syntax near 'statistics'.

My table name is S30.SAPSR3./BI0/F0SMD_PE2H. Though it looked very simple, I spent a day trying every possible way to get rid of the syntax error, all in vain. I also want to run it with the sampling option to reduce the time it takes.

Please give me a helping hand on this. I am running ASE 15.7 SP134.

Accepted Solutions (1)

Former Member

Hello Ajith,

Please use the following commands:


1> set quoted_identifier on
2> go
1> update index statistics SAPSR3."/1CN/CPASAP00001"
2> go
1>


If you have any doubts, refer to the following SAP Note:


1757691 - SYB: How to manually update index statistics of a table using isql


Regards

Anand

Former Member

Thanks a lot Anand. That worked !!!

I was wondering why it did not improve the value of the datachange() function even after update stats ran successfully. The value still looks very unhealthy.

1> set quoted_identifier on
2> go
1> update index statistics SAPSR3."/BI0/F0SMD_PE2H" with sampling = 2 percent
2> go
1> select datachange ('S30.SAPSR3./BI0/F0SMD_PE2H',null,null)
2> go
---------------------------
                48377.814731
(1 row affected)
1> update all statistics SAPSR3."/BI0/F0SMD_PE2H" with sampling = 2 percent
2> go
1> select datachange ('S30.SAPSR3./BI0/F0SMD_PE2H',null,null)
2> go
---------------------------
                50181.052529
1> sp_recompile 'S30.SAPSR3./BI0/F0SMD_PE2H'
2> go
Each stored procedure and trigger that uses table 'S30.SAPSR3./BI0/F0SMD_PE2H' will be recompiled the next time it is executed.
(return status = 0)
1> select datachange ('S30.SAPSR3./BI0/F0SMD_PE2H',null,null)
2> go
---------------------------
                50181.052529
(1 row affected)

Any idea about this?

Former Member

Ajith,

Good to know that you were able to address the issue.

Coming to your next question: I believe there is a difference between update index statistics and update all statistics, hence the difference in value.

Regards

Anand

Former Member

True Anand, but my concern is that the datachange() value for the table should ideally have come down after update stats ran successfully, whereas it did not. Why didn't it?

I still have a feeling it was unable to write statistics for that object.

Former Member

Ajith,

Usually update stats is executed when the datachange value is greater than 5%, in order for us to see a difference.

I believe that if the values do not change, the datachange value could be less than 5%.

Hence we did not observe any difference.

Regards

Anand

Former Member

Dear Anand and other colleagues,


My datachange() shows 50181.052529% after running update stats without errors. What does that actually mean? Following are my concerns:


1.) Does update statistics reset datachange()?

2.) What else, other than moddate and datachange(), can confirm the necessity of update stats?


Kind Regards,

Ajith Prabhakaran



former_member182259
Contributor

datachange was meant to be run on a per-column, per-partition basis. Running it on the entire table results in grossly aggregated percentages, as you have seen. Were I you, I would write a query against syspartitions joined with syscolumns, using datachange(), to see which columns/partitions are the ones most likely tripping the issue.
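For illustration, a minimal per-partition, per-column check might look like the following sketch (the partition and column names here are only examples from this table; substitute your own):

1> select datachange('SAPSR3./BI0/F0SMD_PE2H', '/BI0/F0SMD_PE2H_0000', 'KEY_0SMD_PE2HP')
2> go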

Also, by running update all statistics, you likely caused some real fun for yourself.    ASE's optimizer can use stats on unindexed columns to cost row estimates as part of the decision making process about which way to join tables, etc.   So, sometimes this is done on columns that typically don't index well (extremely low cardinality such as gender) especially if highly skewed so that ASE picks the best plan based on better row estimates.   However, unless you keep these updated, you will potentially wreak havoc on queries that hit old/stale stats.   You might want to use delete statistics to clear these out and then run update index statistics again.
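In isql, that cleanup sequence would look roughly like this (a sketch only; quoted identifiers are needed because of the '/' characters in the table name):

1> set quoted_identifier on
2> go
1> delete statistics SAPSR3."/BI0/F0SMD_PE2H"
2> go
1> update index statistics SAPSR3."/BI0/F0SMD_PE2H"
2> go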

kevin_sherlock
Contributor

Not sure about your update stats and datachange values. Make sure you run "sp_flushstats" just to see if an in-memory value isn't making it to disk or some such problem. Perhaps sampling is causing your datachange values not to be reset. Also, is this table partitioned (i.e., does it have more than one partition)?
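For reference, the flush-then-recheck sequence could be as simple as this sketch:

1> sp_flushstats
2> go
1> select datachange('S30.SAPSR3./BI0/F0SMD_PE2H', null, null)
2> go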

Keep in mind a few things:

1. datachange(<tablename>, null, null) returns the maximum datachange value over only those columns which have histograms (statistics).

2. The datachange function should probably only be used to determine which tables/partitions do NOT need updated stats (in my opinion).

3. Running update stats on a global index of a partitioned table will not reset datachange values.

4. Determining WHEN to run update stats is more of an art form, depending on your specific environment. There are no hard, standard rules to apply here to say a table/column _needs_ update stats. Some use datachange, while others use the various "derived_stat()" numbers, etc. (see the sketch after this list). Most shops simply choose to opt in a group of tables to have an update stats job run on a regular basis.
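As a rough sketch of point 4, one of the derived_stat() numbers can be pulled like this (the exact statistic-name strings should be double-checked against your ASE version's documentation before relying on them):

-- data page cluster ratio for the table's data layer (index id 0)
-- derived_stat(object, index [, partition], statistic)
select dpcr = derived_stat('SAPSR3./BI0/F0SMD_PE2H', 0, 'data page cluster ratio')
go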

To take a closer look, run this SQL to see where your values are coming from:

set quoted_identifier on

select ObjectName=convert(varchar(20),object_name(p.id))
      ,PartName=convert(varchar(20),p.name)
      ,ColName=convert(varchar(20),c.name)
      ,HistDt=convert(varchar(10),s.moddate,101)
      ,Smp_Pct=str(s.c13,8)
      ,Datachg=datachange("SAPSR3."+object_name(p.id),p.name,c.name)
from syspartitions p INNER JOIN syscolumns c
                      ON p.id = c.id
                        and p.indid between 0 and 1
                      LEFT JOIN sysstatistics s
                      ON c.id = s.id
                       and p.id = s.id
                       and s.formatid = 100
                       and s.statid = 0
                       and s.sequence = 1
                       and convert(varbinary,c.colid) = s.colidarray
                       and datalength(s.colidarray) <= 2
where object_name(p.id) = '/BI0/F0SMD_PE2H'
order by object_name(p.id),p.name,c.colid

Columns which have histograms will show up with a "HistDt" value of the last date update stats was done for the column. For each of those columns, the "max" of their values will be what is returned by "select datachange(...., null, null)".
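As a rough cross-check (a sketch, reusing the same sysstatistics join as above), the maximum per-column datachange over those histogrammed columns should line up with the table-level call; note the single-quoted 'SAPSR3.' so it stays a plain string literal regardless of the quoted_identifier setting:

select MaxDatachg = max(datachange('SAPSR3.' + object_name(c.id), null, c.name))
from syscolumns c INNER JOIN sysstatistics s
                  ON c.id = s.id
                   and s.formatid = 100
                   and s.statid = 0
                   and s.sequence = 1
                   and convert(varbinary,c.colid) = s.colidarray
                   and datalength(s.colidarray) <= 2
where object_name(c.id) = '/BI0/F0SMD_PE2H'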

Answers (2)

Former Member

Dear Colleagues,

As said, this table has 650 partitions. The index size is 250 GB and the data size 10 GB.


Since update statistics does not seem to be doing much for the statistics, would you recommend deleting statistics and then running update all statistics on this table?

My concerns:

How badly would this affect the business (during the process of delete statistics and the re-run of update statistics)?

Which update statistics command would best suit running here in that case (after delete statistics)?

My scheduled (DBACockpit) jobs often give the following error, stating that it is unable to write statistics for this table.

"0:0010:00000:00011:2015/12/28 03:45:31.76 server  Unable to write statistics for object 1043999194 in database 4. Please run update statistics on this table after loading this database or prior to accessing the table in the loaded database."

Please Advise.

Kind Regards,

Ajith Prabhakaran

Former Member

Dear Colleagues,

An update:

I deleted the statistics for the respective table (this table has 650 partitions; the index size is 250 GB and the data size 10 GB).

I then ran update all statistics for the table to recreate the statistics. It took around 2 minutes to complete. Now I do not get any messages in the errorlog (such as: Unable to write statistics for object 1043999194 in database 4. Please run update statistics on this table after loading this database or prior to accessing the table in the loaded database.).

Following are the results:

1> select datachange ('S30.SAPSR3./BI0/F0SMD_PE2H',null,null)
2> go
---------------------------
                47299.770467
1> set quoted_identifier on
2> go
1> delete statistics SAPSR3."/BI0/F0SMD_PE2H"
2> go
1> update all statistics SAPSR3."/BI0/F0SMD_PE2H"
2> go
1> select datachange ('S30.SAPSR3./BI0/F0SMD_PE2H',null,null)
2> go
---------------------------
                    0.000000

datachange is of course meant to be run on a per-column, per-partition basis. Perhaps running it on the entire table gives me grossly aggregated percentages. Still, the value has now changed and looks very satisfying.


Regards,

AJ

victoria_normand
Contributor

Hi AJ,

The mentioned object comes from a BW Infocube.
Please keep in mind that for BW environments (also used on Solution Manager) we do not recommend running statistics manually. The statistics should be updated in a Process Chain; the BW code adapted for ASE will determine how to perform the update stats.

So basically the supported ways to update statistics on a BW Infocube's fact tables and dimension tables are:

1. via RSA1->"Infocube"->Manage->Performance->Refresh Statistics

OR

2. via the "Refresh Statistics" / "Construct DB statistics" step in the RSA1 Process Chain definition, mainly after a DTP.


You may use the report RSSYBREFRESHINFOCUBESTATISTICS at InfoCube level (be sure to have the latest version of SAP Note 2138611 - SYB: Better table statistics for star schema tables of an InfoCube implemented).


Kind regards,

Victoria.

Former Member

Thanks Victoria. DBACockpit is what is used to schedule and run the update statistics maintenance job here. Are you referring to something similar?

victoria_normand
Contributor

Hi Ajith,

ATM from DBACockpit should of course also be used, but for "manual" update stats we recommend following the options mentioned previously, not running them manually in an isql session.

You may check the Best Practices information in the Best_Practices_SAP_ASE_v1.4.pdf guide attached to SAP Note 1680803 - SYB: SAP Adaptive Server Enterprise - Best Practice for SAP Business Suite and SAP BW (see "Table Statistics" on page 54).

Kind regards,

Victoria.

Former Member

Thanks a lot, Jeff and Kevin. The different angles of scrutiny into the topic were just tremendous.

The table has 650 partitions. The index size is 250 GB and the data size 10 GB.

I will surely get back with the results of the SQL that fetches columns with histograms, which might explain the strange values returned by select datachange.

Once done, I shall reconstruct things:

Try sp_flushstats to flush any in-memory stats (and check the datachange values),

and then delete statistics and re-run update statistics to construct the latest statistics for that table.

Thanks Anand, Jeff and Kevin, once again, for the immediate help and rescue operations.

I shall post the results after the weekend.

Regards,

Ajith Prabhakaran

Former Member

Dear Colleagues,

I have found something very similar, and I feel I am hitting a bug:

2079837 - SYB: Avoid redundant statistics update due to CR 770415

select datachange('S30.SAPSR3./BI0/F0SMD_PE2H',NULL,NULL)
go
---------------------------
                   51328.034017
(1 row affected)
update all statistics "S30.SAPSR3./BI0/F0SMD_PE2H"
go
sp_flushstats
go
(return status = 0)
select datachange('S30.SAPSR3./BI0/F0SMD_PE2H',NULL,NULL)
go
---------------------------
                   51328.034017

"to avoid running into CR 770415, we need to enforce no hashing for partitioned tables on older releases

if partcnt > 1 and ( dbrel < '15.7.0.132' or ( dbrel+0(4) = '16.0' and dbrel < '16.0.01.00' ) ).

The version I am using is ASE 15.7 SP134, so I plan to raise a case with SAP.

What is your call?

kevin_sherlock
Contributor

You didn't mention using hashing for statistics; I don't see that in your syntax. Perhaps you have used it and set it to be "sticky"? Hashed statistics are known to be a bit buggy pre-16.0. You might need to use the "no hashing" option of the update stats command.
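For reference, a minimal isql sketch of forcing the no-hashing path, assuming the with no_hashing clause of update statistics in 15.7:

1> set quoted_identifier on
2> go
1> update index statistics SAPSR3."/BI0/F0SMD_PE2H" with no_hashing
2> go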

Try looking at this output:

set quoted_identifier on

select ObjectName=convert(varchar(20),object_name(c.id))
      ,ColName=convert(varchar(20),c.name)
      ,"HashSticky_Group1" = case
        when s.c1 & 6144 = 2048 then "Stats partial hashing sticky"
        when s.c1 & 6144 = 4096 then "Stats hashing sticky"
        when s.c1 & 6144 = 6144 then "Stats no hashing sticky"
        else "Not set"
        end
      , "Hashing Pll Sticky" = case
        when s.c1 & 65536 = 65536 then "Stats hashing parallel sticky" else "Not set"
        end
      , "Hash Low Domain Sticky" = case
        when s.c1 & 8192 = 8192 then "Stats hashing low domain sticky" else "Not set"
        end
      ,"Hash High Domain Sticky" = case
        when s.c1 & 16384 = 16384 then "Status hashing high domain sticky" else "Not set"
        end
      ,"Stats Sampling Sticky" = case
        when s.c1 & 256 = 256 then "Stats sampling sticky" else "Not set"
        end
from syscolumns c
                      INNER JOIN sysstatistics s
                      ON c.id = s.id
                       and s.formatid = 100
                       and s.statid = 0
                       and s.sequence = 1
                       and convert(varbinary,c.colid) = s.colidarray
                       and datalength(s.colidarray) <= 2
where object_name(c.id) = '/BI0/F0SMD_PE2H'
order by object_name(c.id),c.name,c.colid

Former Member

Hello Kevin,

You are spot on. I actually did not use the "hashing" feature while running update statistics.

Meanwhile, I was trying to get the output of the query you pasted. All I get are

Msg 207, Level 16, State 4:
Server 'S30', Line 1:
Invalid column name

errors. I am troubleshooting this.

Would it be better to run this as an input SQL file?

Regards,

Ajith Prabhakaran

kevin_sherlock
Contributor

I don't know what your error would have to do with the SQL that I pasted unless you are changing the object name:

where object_name(c.id) = '/BI0/F0SMD_PE2H'

I've seen above where you specify 'S30' in front of the name. Not sure why you are doing that. Just include the table name with no other qualifiers, i.e., run the SQL _exactly_ as I posted it. Or post the _exact_ SQL you are running here so we can see the SQL code and the error together.

Former Member

1> select ObjectName=convert(varchar(20),object_name(p.id))
2> ,PartName=convert(varchar(20),p.name)
3> ,ColName=convert(varchar(20),c.name)
4> ,HistDt=convert(varchar(10),s.moddate,101)
5> ,Smp_Pct=str(s.c13,8)
6> ,Datachg=datachange("SAPSR3."+object_name(p.id),p.name,c.name)
7> from syspartitions p INNER JOIN syscolumns c
8> ON p.id = c.id
9> and p.indid between 0 and 1
10> LEFT JOIN sysstatistics s
11> ON c.id = s.id
12> and p.id = s.id
13> and s.formatid = 100
14> and s.statid = 0
15> and s.sequence = 1
16> and convert(varbinary,c.colid) = s.colidarray
17> and datalength(s.colidarray) <= 2
18> where object_name(p.id) = '/BI0/F0SMD_PE2H'
19> order by object_name(p.id),p.name,c.colid
20> go
Msg 207, Level 16, State 4:
Server 'S30', Line 1:
Invalid column name 'SAPSR3.'.

Hi Kevin, just pasting the result. Please tell me what needs to be corrected here.

kevin_sherlock
Contributor

Please include all the statements that I posted. I don't see the first part:

set quoted_identifier on

before the SQL statement.

Former Member

Hi Kevin,

set quoted_identifier on was the culprit. When I removed it, I got the output.

1> select ObjectName=convert(varchar(20),object_name(p.id))
2> ,PartName=convert(varchar(20),p.name)
3> ,ColName=convert(varchar(20),c.name)
4> ,HistDt=convert(varchar(10),s.moddate,101)
5> ,Smp_Pct=str(s.c13,8)
6> ,Datachg=datachange("SAPSR3."+object_name(p.id),p.name,c.name)
7> from syspartitions p INNER JOIN syscolumns c
8> ON p.id = c.id
9> and p.indid between 0 and 1
10> LEFT JOIN sysstatistics s
11> ON c.id = s.id
12> and p.id = s.id
13> and s.formatid = 100
14> and s.statid = 0
15> and s.sequence = 1
16> and convert(varbinary,c.colid) = s.colidarray
17> and datalength(s.colidarray) <= 2
18> where object_name(p.id) = '/BI0/F0SMD_PE2H'
19> order by object_name(p.id),p.name,c.colid
20> go | tail -4
/BI0/F0SMD_PE2H      /BI0/F0SMD_PE2H_0000 SMD_KEY              12/17/2015 NULL                        0.000000
/BI0/F0SMD_PE2H      /BI0/F0SMD_PE2H_0000 SMD_KEY              12/17/2015 NULL                        0.000000
(4812285 rows affected)

The output is very, very large.

I took the top 10 results, which are:

ObjectName           PartName             ColName              HistDt     Smp_Pct  Datachg
-------------------- -------------------- -------------------- ---------- -------- ---------------------------
/BI0/F0SMD_PE2H      /BI0/F0SMD_PE2H_0000 KEY_0SMD_PE2HP       02/06/2014 NULL                        0.000000
/BI0/F0SMD_PE2H      /BI0/F0SMD_PE2H_0000 KEY_0SMD_PE2HP       02/06/2014 NULL                        0.000000
/BI0/F0SMD_PE2H      /BI0/F0SMD_PE2H_0000 KEY_0SMD_PE2HP       02/06/2014 NULL                        0.000000
/BI0/F0SMD_PE2H      /BI0/F0SMD_PE2H_0000 KEY_0SMD_PE2HP       02/06/2014 NULL                        0.000000
/BI0/F0SMD_PE2H      /BI0/F0SMD_PE2H_0000 KEY_0SMD_PE2HP       02/06/2014 NULL                        0.000000
/BI0/F0SMD_PE2H      /BI0/F0SMD_PE2H_0000 KEY_0SMD_PE2HP       02/06/2014 NULL                        0.000000
/BI0/F0SMD_PE2H      /BI0/F0SMD_PE2H_0000 KEY_0SMD_PE2HP       03/24/2014 NULL                        0.000000
/BI0/F0SMD_PE2H      /BI0/F0SMD_PE2H_0000 KEY_0SMD_PE2HP       03/24/2014 NULL                        0.000000
/BI0/F0SMD_PE2H      /BI0/F0SMD_PE2H_0000 KEY_0SMD_PE2HP       03/24/2014 NULL                        0.000000
/BI0/F0SMD_PE2H      /BI0/F0SMD_PE2H_0000 KEY_0SMD_PE2HP       03/24/2014 NULL                        0.000000