
Numeric overflow error when querying large volume of data

patrickbachmann
Active Contributor

Hi folks,

I have a calculation view that runs against the large BSEG table and joins to some other tables. It works fine until you select a larger date range: for example, 1 year works fine, but if you choose 2 years of history (based upon BUDAT) we get a 'numeric overflow' error. So I started dissecting the view and stripped the output down until it contained just a single field, and that succeeded for large volumes of data. Next I slowly added fields back into the output until the numeric overflow error returned. I then discovered that some fields had a certain length in the source table BSEG, such as NVARCHAR(1), but were defined incorrectly in the view output as NVARCHAR(2). While it's baffling to me why this would cause a numeric overflow, I fixed these mismatches and those fields are now fine. However, there are some calculated measure fields that are still causing problems.

Example:

CASE WHEN B.SHKZG = 'H' THEN (B.DMBTR * -1) ELSE (B.DMBTR * 1) END AS "AMOUNT",

This case statement simply checks the value of the credit/debit indicator and makes the number positive or negative before outputting it as AMOUNT. This AMOUNT field is type DECIMAL(13,2), which is exactly the same definition as the source field DMBTR. So I'm not sure why this simple case statement would cause a numeric overflow when selecting a larger date range, yet runs fine with a smaller date range.

I've started analyzing the visual execution plan, which is quite extensive and will take some time, but in the meantime I figured I would post this question in case somebody here sees an obvious problem with this case statement.

Thanks!

-Patrick

Accepted Solutions (1)

Former Member

Hi Patrick,

When we select large data sets, the measures get summed up, which might be causing the numeric overflow.

Can you please increase the decimal length to DECIMAL(20,2) and give it a try? That should resolve your problem.
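A minimal sketch of that change, reusing the column names from Patrick's original CASE expression (widening before the multiplication is an assumption about where the intermediate result is typed, not something confirmed in this thread):

```sql
-- Sketch: cast the source amount to a wider decimal before applying the sign,
-- so the result (and any aggregation over it) has headroom beyond DECIMAL(13,2).
CASE
    WHEN B.SHKZG = 'H' THEN CAST(B.DMBTR AS DECIMAL(20,2)) * -1
    ELSE CAST(B.DMBTR AS DECIMAL(20,2))
END AS "AMOUNT",
```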

Regards,

Venkat

rama_shankar3
Active Contributor

Good point Venkat! Thanks!

Patrick, let us know whether this resolves your problem.

Thanks,

Rama

patrickbachmann
Active Contributor

Actually I tried both of your ideas (cast and also changing to decimal 20,2) and neither worked.  Any other suggestions?

Former Member

Well, depending on your time and patience...

It sounds like you're running the SQL against your CalcView, correct?

Do you get the same error when running SQL against base tables?

Can you try the calculation as a calculated attribute rather than in SQL?

Do you get the same error with a copy of the table? (I've never worked with BSEG but hear it's huge, so perhaps copying doesn't make much sense)

Can you make a small copy of the table and re-test? I.e. minimum columns required, reduce the data until you get to the point where the issue disappears?

If you create other test data tables and use the same structure/model (i.e. same data types, joins, etc), do you get the same error?

I.e., try to duplicate the error in as many different ways as possible with the simplest data set and structure possible, and then compare the two iterations where the problem exists and where it disappears.

patrickbachmann
Active Contributor

After extensive tinkering and pasting the code from my scripted view into the SQL editor and running it there, I have discovered it's not the two measures causing the problems. It's two date conversions which I had removed from the view output, but which were still in the SCRIPT box of the calculation view. When I pasted everything from the script box into the SQL editor, I quickly realized that once I took these date conversions out, the numeric overflow error disappeared.

Here are the problems:

TO_DATE(H.BUDAT, 'YYYYMMDD')

TO_DATE(M1.ERDAT, 'YYYYMMDD')

These were being converted to feed a MicroStrategy prompt in the report front end.
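One common reason a TO_DATE like this blows up on SAP date columns (an assumption here, not confirmed in the thread) is rows holding the initial value '00000000' or empty strings, which only appear once the date range pulls in enough data. A defensive sketch, using the column names from the post:

```sql
-- Sketch: guard the conversion against initial/empty date values before
-- calling TO_DATE. The '00000000' initial value is an assumption based on
-- typical SAP date storage; verify against the actual data.
CASE
    WHEN H.BUDAT IS NULL OR H.BUDAT = '00000000' THEN NULL
    ELSE TO_DATE(H.BUDAT, 'YYYYMMDD')
END AS "POSTING_DATE",
```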

-Patrick

patrickbachmann
Active Contributor

I just want to elaborate a bit more for you guys so you understand how I was led down the path of the measures being the issue instead of the date conversion. While the date conversion is the root of the problem, tinkering with the measures was having an effect on the results; i.e., there was still a bit of a relationship, if you will.

For example, my script-type calculation view contains:

1) SCRIPT BOX

Contains SQL code including my many attributes, 2 calculated measures, and the 2 bad date conversions. All of these fields were output to my 'Output of script'.

2) OUTPUT BOX

This box contains a list of all the objects derived from my 'output of script', but here I experimented with removing them from the TRUE final output list.

Now, removing all but a single attribute from the OUTPUT box made the view run without error (whilst leaving the bad date conversions still in step 1!). When I added fields back one at a time, I noticed that once I added the calculated measures back, the view would fail again. So even though I kept the bad date conversions in the step 1 script box, I was able to make the error occur or disappear by tweaking the calculated measures. It's as though the resources required to do the bad date conversions were affecting how much data could also be converted in the calculated measures. So there was a relationship. But removing the bad date conversions completely from the script box then allowed my calculated measures to work fully.

I hope this makes sense, it's difficult to explain textually.

-Patrick

rama_shankar3
Active Contributor

Makes sense. Thanks Patrick!

Answers (2)

yeushengteo
Advisor

I agree with Jody that it is a bug.

Maybe you can try hardcoding the amount as all negative (* -1) or all positive (* 1) and see whether the same error happens. Multiplying by positive 1 should not change anything, in theory.

Former Member

Not a clue, but just curious what happens if you use 1.0 or -1.0, and/or explicitly cast 1 or -1 to decimal data types?
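A sketch of that explicit-cast variant, reusing the column names from the original post (the idea being that both CASE branches then agree on a decimal result type rather than relying on implicit integer-to-decimal conversion; whether this changes the behavior here is untested):

```sql
-- Sketch: type the sign literal explicitly so both branches of the CASE
-- produce the same decimal type.
CASE
    WHEN B.SHKZG = 'H' THEN B.DMBTR * CAST(-1 AS DECIMAL(2,0))
    ELSE B.DMBTR * CAST(1 AS DECIMAL(2,0))
END AS "AMOUNT",
```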

Looks like a bug, you might consider opening an OSS message?