on 12-17-2015 6:54 AM
Dear Colleagues,
One of the scheduled UPDATE STATISTICS jobs failed to run, so I tried to execute it manually.
1> update index statistics 'S30.SAPSR3./BI0/F0SMD_PE2H'
2> go
Msg 102, Level 15, State 181:
Server 'S30', Line 1:
Incorrect syntax near 'statistics'.
My table name is S30.SAPSR3./BI0/F0SMD_PE2H. Though it looked very simple, I spent a day trying every possible way to get rid of the syntax error, all in vain. I wanted to run it with the sampling option to reduce the time it takes.
Please lend me a hand with this. I am running ASE 15.7 SP134.
Hello Ajith,
Please use the following commands:
1> set quoted_identifier on
2> go
1> update index statistics SAPSR3."/1CN/CPASAP00001"
2> go
1>
If you have any doubts, refer to the following SAP Note:
1757691 - SYB: How to manually update index statistics of a table using isql
Regards
Anand
Thanks a lot Anand. That worked !!!
I was wondering why it did not improve the value of the datachange() function, even after update stats ran successfully. The value still looks very unhealthy.
1> set quoted_identifier on
2> go
1> update index statistics SAPSR3."/BI0/F0SMD_PE2H" with sampling = 2 percent
2> go
1> select datachange ('S30.SAPSR3./BI0/F0SMD_PE2H',null,null)
2> go
---------------------------
48377.814731
(1 row affected)
1> update all statistics SAPSR3."/BI0/F0SMD_PE2H" with sampling = 2 percent
2> go
1> select datachange ('S30.SAPSR3./BI0/F0SMD_PE2H',null,null)
2> go
---------------------------
50181.052529
1> sp_recompile 'S30.SAPSR3./BI0/F0SMD_PE2H'
2> go
Each stored procedure and trigger that uses table 'S30.SAPSR3./BI0/F0SMD_PE2H' will be recompiled the next time it is executed.
(return status = 0)
1> select datachange ('S30.SAPSR3./BI0/F0SMD_PE2H',null,null)
2> go
---------------------------
50181.052529
(1 row affected)
Any idea about this?
Dear Anand and other dear colleagues
My datachange() shows 50181.052529% after running update stats without errors. What does it actually mean? My concerns are:
1.) Does update statistics reset datachange()?
2.) What else, other than moddate and datachange(), can confirm the necessity of running update statistics?
Kind Regards,
Ajith Prabhakaran
datachange was meant to be run on a per-column, per-partition basis. Running it on the entire table results in grossly aggregated percentages, as you have seen. Were I you, I would write a query against syspartitions joined with syscolumns, using datachange(), to see which columns/partitions are the ones most likely tripping the issue.
Also, by running update all statistics, you likely caused some real fun for yourself. ASE's optimizer can use stats on unindexed columns to cost row estimates as part of the decision making process about which way to join tables, etc. So, sometimes this is done on columns that typically don't index well (extremely low cardinality such as gender) especially if highly skewed so that ASE picks the best plan based on better row estimates. However, unless you keep these updated, you will potentially wreak havoc on queries that hit old/stale stats. You might want to use delete statistics to clear these out and then run update index statistics again.
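A minimal isql sketch of that cleanup, assuming the same table as in the posts above (the 2 percent sampling figure is only illustrative, not a recommendation):

```sql
-- Needed so the /BI0/... table name can be double-quoted
set quoted_identifier on
go
-- Drop all stored statistics, including any on unindexed columns
delete statistics SAPSR3."/BI0/F0SMD_PE2H"
go
-- Rebuild histograms for index columns only
update index statistics SAPSR3."/BI0/F0SMD_PE2H" with sampling = 2 percent
go
```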
Not sure about your update stats and datachange values. Make sure you run "sp_flushstats", just to see whether an in-memory value isn't making it to disk or some such problem. Perhaps sampling is causing your datachange values not to be reset. Also, is this table partitioned (i.e., does it have more than one partition)?
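To rule out the in-memory case, a sketch of flushing and re-checking (passing a table name to sp_flushstats is an assumption on my part; running it with no argument flushes everything):

```sql
-- Flush in-memory datachange counters to disk before re-reading them
sp_flushstats '/BI0/F0SMD_PE2H'   -- table-name argument assumed; omit to flush all
go
select datachange('S30.SAPSR3./BI0/F0SMD_PE2H', null, null)
go
```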
Keep in mind a few things:
1. datachange(<tablename>,null,null) returns the maximum datachange value over only those columns which have histograms (statistics).
2. datachange function should probably be only used to determine which tables/partitions do NOT need updated stats. (in my opinion)
3. Running update stats on a global index of a partitioned table will not reset datachange values.
4. Determining WHEN to run update stats is more of an art form, depending on your specific environment. There are no hard, standard rules to say a table/column _needs_ update stats. Some use datachange, while others use the various "derived_stat()" numbers, etc. Most shops simply choose to opt a group of tables into an update stats job that runs on a regular basis.
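Point 1 can be narrowed down by calling datachange() per partition and per column instead of with nulls; a sketch, where the partition and column names are only placeholders for ones from your own table:

```sql
-- null arguments aggregate; naming a partition and column isolates the value
select datachange('SAPSR3./BI0/F0SMD_PE2H',
                  '/BI0/F0SMD_PE2H_0000',    -- one partition
                  'KEY_0SMD_PE2HP')          -- one column
go
```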
To take a closer look, run this SQL to see where your values are coming from:
set quoted_identifier on
select ObjectName=convert(varchar(20),object_name(p.id))
,PartName=convert(varchar(20),p.name)
,ColName=convert(varchar(20),c.name)
,HistDt=convert(varchar(10),s.moddate,101)
,Smp_Pct=str(s.c13,8)
,Datachg=datachange("SAPSR3."+object_name(p.id),p.name,c.name)
from syspartitions p INNER JOIN syscolumns c
ON p.id = c.id
and p.indid between 0 and 1
LEFT JOIN sysstatistics s
ON c.id = s.id
and p.id = s.id
and s.formatid = 100
and s.statid = 0
and s.sequence = 1
and convert(varbinary,c.colid) = s.colidarray
and datalength(s.colidarray) <= 2
where object_name(p.id) = '/BI0/F0SMD_PE2H'
order by object_name(p.id),p.name,c.colid
Columns which have histograms will show up with a "HistDt" value of the last date update stats were done for the column. For each of these types of columns, the "max" value of those will be what is returned by "select datachange(...., null,null) "
Dear Colleagues,
As mentioned, this table has 650 partitions. The index size is 250 GB and the data size is 10 GB.
Since update statistics does not seem to improve the statistics much, would you recommend deleting the statistics and then running update all statistics on this table?
My concerns:
How badly would this affect the business (during the window between delete statistics and the re-run of update statistics)?
Which update statistics command would best suit here, in that case (after delete statistics)?
My scheduled (DBACockpit) jobs often give the following error, stating that statistics cannot be written for this table:
"0:0010:00000:00011:2015/12/28 03:45:31.76 server Unable to write statistics for object 1043999194 in database 4. Please run update statistics on this table after loading this database or prior to accessing the table in the loaded database."
Please Advise.
Kind Regards,
Ajith Prabhakaran
Dear Colleagues,
An update:
I deleted the statistics for the respective table (this table has 650 partitions; the index size is 250 GB and the data size is 10 GB).
I then ran update all statistics on the table to recreate the statistics. It took around 2 minutes to complete. I no longer get any messages in the errorlog (such as: Unable to write statistics for object 1043999194 in database 4. Please run update statistics on this table after loading this database or prior to accessing the table in the loaded database.).
The results are as follows:
1> select datachange ('S30.SAPSR3./BI0/F0SMD_PE2H',null,null)
2> go
---------------------------
47299.770467
1> set quoted_identifier on
2> go
1> delete statistics SAPSR3."/BI0/F0SMD_PE2H"
2> go
1> update all statistics SAPSR3."/BI0/F0SMD_PE2H"
2> go
1> select datachange ('S30.SAPSR3./BI0/F0SMD_PE2H',null,null)
2> go
---------------------------
0.000000
datachange is of course meant to be run on a per-column, per-partition basis; perhaps running it on the entire table gave me grossly aggregated percentages. Still, the value has now changed and looks very satisfactory.
Regards,
AJ
Hi AJ,
The mentioned object comes from a BW InfoCube.
Please keep in mind that for BW environments (also used in Solution Manager) we do not recommend running statistics manually. The statistics should be updated in a Process Chain; the BW code adapted for ASE will determine how to perform the update stats.
So basically the supported ways to update statistics on a BW Infocube's fact tables and dimension tables are:
1. via RSA1->"Infocube"->Manage->Performance->Refresh Statistics
OR
2. via "Refresh Statistics / "Construct DB statistics" step in the RSA1 - Process Chain definition, mainly after a DTP.
You may use the report RSSYBREFRESHINFOCUBESTATISTICS at InfoCube level (be sure to have the latest version of SAP Note 2138611 - SYB: Better table statistics for star schema tables of an InfoCube implemented).
Kind regards,
Victoria.
Hi Ajith,
ATM from DBACockpit should of course also be used, but for "manual" update stats we recommend following the options mentioned previously, rather than running them manually in an isql session.
You may check information about Best Practices on the Best_Practices_SAP_ASE_v1.4.pdf guide attached to SAP Note 1680803 - SYB: SAP Adaptive Server Enterprise - Best Practice for SAP Business Suite and SAP BW (check about "Table Statistics" on page 54).
Kind regards,
Victoria.
Thanks a lot, Jeff and Kevin. The different angles of scrutiny on this topic were just tremendous.
The table has 650 partitions; the index size is 250 GB and the data size is 10 GB.
I will surely get back with the results of the SQL that fetches columns with histograms, which might explain the strange values returned by select datachange.
Once done, I shall proceed as follows:
Run sp_flushstats to flush any in-memory stats (and check the datachange values),
then delete statistics and re-run update statistics to build fresh, up-to-date statistics for the table.
Thanks Anand, Jeff and Kevin once again for the immediate help and rescue operations.
I shall post result after the weekend.
Regards,
Ajith Prabhakaran
Dear Colleagues,
I have found something very similar, and I suspect I am hitting a bug:
2079837 - SYB: Avoid redundant statistics update due to CR 770415
"to avoid running into CR 770415, we need to enforce no hashing for partitioned tables on older releases
if partcnt > 1 and ( dbrel < '15.7.0.132' or ( dbrel+0(4) = '16.0' and dbrel < '16.0.01.00' ) ).
The version I am using is ASE 15.7 SP134, so I plan to raise a case with SAP.
What is your call?
You didn't mention using hashing for statistics, and I don't see it in your syntax. Perhaps you have used it and set it to be "sticky"? Hashed statistics are known to be a bit buggy pre-16.0. You might need to use the "no hashing" option of the update statistics command.
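A sketch of what forcing the non-hashed algorithm might look like (the no_hashing keyword is my reading of the 15.7 SP100+ syntax; check the update statistics reference for your exact SP level):

```sql
set quoted_identifier on
go
-- Explicitly request the non-hashed statistics-gathering algorithm
update index statistics SAPSR3."/BI0/F0SMD_PE2H" with no_hashing
go
```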
Try looking at this output:
set quoted_identifier on
select ObjectName=convert(varchar(20),object_name(c.id))
,ColName=convert(varchar(20),c.name)
,"HashSticky_Group1" = case
when s.c1 & 6144 = 2048 then "Stats partial hashing sticky"
when s.c1 & 6144 = 4096 then "Stats hashing sticky"
when s.c1 & 6144 = 6144 then "Stats no hashing sticky"
else "Not set"
end
, "Hashing Pll Sticky" = case
when s.c1 & 65536 = 65536 then "Stats hashing parallel sticky" else "Not set"
end
, "Hash Low Domain Sticky" = case
when s.c1 & 8192 = 8192 then "Stats hashing low domain sticky" else "Not set"
end
,"Hash High Domain Sticky" = case
when s.c1 & 16384 = 16384 then "Status hashing high domain sticky" else "Not set"
end
,"Stats Sampling Sticky" = case
when s.c1 & 256 = 256 then "Stats sampling sticky" else "Not set"
end
from syscolumns c
INNER JOIN sysstatistics s
ON c.id = s.id
and s.formatid = 100
and s.statid = 0
and s.sequence = 1
and convert(varbinary,c.colid) = s.colidarray
and datalength(s.colidarray) <= 2
where object_name(c.id) = '/BI0/F0SMD_PE2H'
order by object_name(c.id),c.name,c.colid
Hello Kevin,
You are spot on. I actually did not use the "hashing" feature while running update statistics.
Meanwhile, I tried to get the output of the query you pasted, but I only get errors like:
Msg 207, Level 16, State 4:
Server 'S30', Line 1:
Invalid column name ...
I am troubleshooting this.
Would it be better to run this as an input SQL file?
Regards,
Ajith Prabhakaran
I don't know what your error would have to do with the SQL I pasted, unless you are changing the object name:
where object_name(c.id) = '/BI0/F0SMD_PE2H'
I've seen above that you specify 'S30' in front of the name; I'm not sure why you are doing that. Just include the table name with no other qualifiers, i.e. run the SQL _exactly_ as I posted it. Or post the _exact_ SQL you are running here, so we can see the SQL code and the error together.
1> select ObjectName=convert(varchar(20),object_name(p.id))
2> ,PartName=convert(varchar(20),p.name)
3> ,ColName=convert(varchar(20),c.name)
4> ,HistDt=convert(varchar(10),s.moddate,101)
5> ,Smp_Pct=str(s.c13,8)
6> ,Datachg=datachange("SAPSR3."+object_name(p.id),p.name,c.name)
7> from syspartitions p INNER JOIN syscolumns c
8> ON p.id = c.id
9> and p.indid between 0 and 1
10> LEFT JOIN sysstatistics s
11> ON c.id = s.id
12> and p.id = s.id
13> and s.formatid = 100
14> and s.statid = 0
15> and s.sequence = 1
16> and convert(varbinary,c.colid) = s.colidarray
17> and datalength(s.colidarray) <= 2
18> where object_name(p.id) = '/BI0/F0SMD_PE2H'
19> order by object_name(p.id),p.name,c.colid
20> go
Msg 207, Level 16, State 4:
Server 'S30', Line 1:
Invalid column name 'SAPSR3.'.
Hi Kevin, I am just pasting the result. Please tell me what needs to be corrected here.
Hi Kevin,
set quoted_identifier on was the culprit. When I removed it, I got the output. (With quoted_identifier on, the double-quoted "SAPSR3." in the datachange() call is parsed as an identifier rather than a string, which explains the Invalid column name 'SAPSR3.' error.)
1> select ObjectName=convert(varchar(20),object_name(p.id))
2> ,PartName=convert(varchar(20),p.name)
3> ,ColName=convert(varchar(20),c.name)
4> ,HistDt=convert(varchar(10),s.moddate,101)
5> ,Smp_Pct=str(s.c13,8)
6> ,Datachg=datachange("SAPSR3."+object_name(p.id),p.name,c.name)
7> from syspartitions p INNER JOIN syscolumns c
8> ON p.id = c.id
9> and p.indid between 0 and 1
10> LEFT JOIN sysstatistics s
11> ON c.id = s.id
12> and p.id = s.id
13> and s.formatid = 100
14> and s.statid = 0
15> and s.sequence = 1
16> and convert(varbinary,c.colid) = s.colidarray
17> and datalength(s.colidarray) <= 2
18> where object_name(p.id) = '/BI0/F0SMD_PE2H'
19> order by object_name(p.id),p.name,c.colid
20> go | tail -4
/BI0/F0SMD_PE2H /BI0/F0SMD_PE2H_0000 SMD_KEY 12/17/2015 NULL 0.000000
/BI0/F0SMD_PE2H /BI0/F0SMD_PE2H_0000 SMD_KEY 12/17/2015 NULL 0.000000
(4812285 rows affected)
The output is very large.
I took the top 10 results, which are:
ObjectName PartName ColName HistDt Smp_Pct Datachg
-------------------- -------------------- -------------------- ---------- -------- ---------------------------
/BI0/F0SMD_PE2H /BI0/F0SMD_PE2H_0000 KEY_0SMD_PE2HP 02/06/2014 NULL 0.000000
/BI0/F0SMD_PE2H /BI0/F0SMD_PE2H_0000 KEY_0SMD_PE2HP 02/06/2014 NULL 0.000000
/BI0/F0SMD_PE2H /BI0/F0SMD_PE2H_0000 KEY_0SMD_PE2HP 02/06/2014 NULL 0.000000
/BI0/F0SMD_PE2H /BI0/F0SMD_PE2H_0000 KEY_0SMD_PE2HP 02/06/2014 NULL 0.000000
/BI0/F0SMD_PE2H /BI0/F0SMD_PE2H_0000 KEY_0SMD_PE2HP 02/06/2014 NULL 0.000000
/BI0/F0SMD_PE2H /BI0/F0SMD_PE2H_0000 KEY_0SMD_PE2HP 02/06/2014 NULL 0.000000
/BI0/F0SMD_PE2H /BI0/F0SMD_PE2H_0000 KEY_0SMD_PE2HP 03/24/2014 NULL 0.000000
/BI0/F0SMD_PE2H /BI0/F0SMD_PE2H_0000 KEY_0SMD_PE2HP 03/24/2014 NULL 0.000000
/BI0/F0SMD_PE2H /BI0/F0SMD_PE2H_0000 KEY_0SMD_PE2HP 03/24/2014 NULL 0.000000
/BI0/F0SMD_PE2H /BI0/F0SMD_PE2H_0000 KEY_0SMD_PE2HP 03/24/2014 NULL 0.000000