
How to traverse or get column values dynamically from a table (HANA 1.0 SP12) using SQLScript

Hi All,

I am stuck in the scenario below.

For example, a test, let's call it 'Current Calibration Data', is conducted on a product with parameters such as setpoint, readback, and v_offset. Once the product is tested, values are generated for these parameters. All these steps are saved in a flat file (.txt), which is my input (source) table. From the input table I need to populate the test status and the parameters into the target tables, seq result and parameters.

The input table has around 25 columns, and I need to access column and row values dynamically in order to insert them into two different tables.

I am using SQLScript in HANA, and I am now stuck on how to control the columns' values and populate them into the two different tables.

To achieve this, I am trying to use the mathematical trick of transposing a matrix. Let's take row 2 into an internal table with an additional column, say DUMMY, and create one more table, MATRIX, which has a DUMMY column plus five other columns holding only 0s and 1s, similar to a transposed matrix, as shown below. In reality it will be around 25-26 columns. I don't see any other way to deal with this situation, so if the SAP gurus can guide me or suggest what else can be done, I would be really grateful.

I can't use the flowgraph pivot/unpivot nodes because I need to work with both row and column values.
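For reference, the 0/1 "transpose matrix" idea can also be expressed directly in SQL as an unpivot via a CROSS JOIN with a small mapping table. This is only a sketch: the column names (COL1..COL5) and a ROW_ID column are assumptions, since the staging table's DDL is not shown in the question.

```sql
-- Hypothetical sketch: unpivot N columns to rows without the
-- flowgraph node, by cross-joining with a column-number mapping
-- (the relational equivalent of multiplying by a 0/1 matrix).
SELECT t.ROW_ID,
       m.COL_NAME AS PARAMETER,
       CASE m.COL_NO
            WHEN 2 THEN t.COL2
            WHEN 3 THEN t.COL3
            WHEN 4 THEN t.COL4
       END AS VALUE
FROM   UNPIVOT_TABLE AS t
CROSS JOIN (SELECT 2 AS COL_NO, 'Setpoint' AS COL_NAME FROM DUMMY
            UNION ALL SELECT 3, 'Readback' FROM DUMMY
            UNION ALL SELECT 4, 'Actual Output' FROM DUMMY) AS m;
```

Each source row is repeated once per mapped column, which turns the 25-26 columns into one (parameter, value) row each.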

Input Data:


INSERT INTO UNPIVOT_TABLE VALUES ('Voltage Regulation, AEBus (Coldfire)','','','','');


INSERT INTO UNPIVOT_TABLE VALUES ('0.000V','1.000V','0.098V','4','PASS');

INSERT INTO UNPIVOT_TABLE VALUES ('Load: 3R9','','','','');

INSERT INTO UNPIVOT_TABLE VALUES ('Current Calibration Data','','','','');


INSERT INTO UNPIVOT_TABLE VALUES ('0.000 A','0.100 A','-0.004 A','86','');

INSERT INTO UNPIVOT_TABLE VALUES ('85.000 A','85.022 A','85.000 A','86','');

INSERT INTO UNPIVOT_TABLE VALUES ('Current Regulation, AEBus (Coldfire) ','','','','');

INSERT INTO UNPIVOT_TABLE VALUES ('Level ','Setpoint','Readback','Actual Output','STATUS');

INSERT INTO UNPIVOT_TABLE VALUES ('10%','10.00 A','10.00 A','10.04 A','PASS');

INSERT INTO UNPIVOT_TABLE VALUES ('50%','50.00 A','50.00 A','50.06 A','PASS');

The target table structures will be:

1. CREATE COLUMN TABLE SEQ_TABLE (ID INTEGER PRIMARY KEY, SEQ_NO INTEGER, TEST VARCHAR(100), STATUS VARCHAR(100));

OUTPUT 1: SEQ_TABLE

ID(PK)  SEQ_NO  TEST                                  STATUS
1       101     Voltage Regulation, AEBus             PASS
2       102     Load 3R9
3       103     Current Calibration Data              FAIL
4       104     Current Calibration Data              FAIL
5       105     Current Regulation, AEBus (Coldfire)  PASS
6       106     Current Regulation, AEBus (Coldfire)  PASS

2. CREATE COLUMN TABLE PARAMETER_TABLE (ID INTEGER PRIMARY KEY, SEQ_RESULT_ID INTEGER, PARAMETER VARCHAR(100), VALUE VARCHAR(100)); Attaching the output table for reference only.

a. SEQ_RESULT_ID comes from

b. The field SEQ_NO can default to 101.
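One way to supply both primary keys and the SEQ_RESULT_ID link between the two target tables is a database sequence. A minimal sketch, assuming hypothetical sequence names (SEQ_TABLE_ID, PARAM_ID) and the DDL above:

```sql
-- Hypothetical sketch: sequences supply the primary keys, and
-- CURRVAL links PARAMETER_TABLE rows back to their SEQ_TABLE row.
CREATE SEQUENCE SEQ_TABLE_ID START WITH 1;
CREATE SEQUENCE PARAM_ID START WITH 1;

INSERT INTO SEQ_TABLE
VALUES (SEQ_TABLE_ID.NEXTVAL, 101, 'Current Calibration Data', 'FAIL');

-- the parameter rows reuse the sequence's current value as the link
INSERT INTO PARAMETER_TABLE
VALUES (PARAM_ID.NEXTVAL, SEQ_TABLE_ID.CURRVAL, 'Setpoint', '0.000 A');
INSERT INTO PARAMETER_TABLE
VALUES (PARAM_ID.NEXTVAL, SEQ_TABLE_ID.CURRVAL, 'Readback', '0.100 A');
```

Using CURRVAL inside the same session avoids having to re-select the SEQ_TABLE row just to find its ID.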

Thank you in advance for your guidance and help.

Thank You

Sunil Sharma

zgtfi.png (20.0 kB)


1 Answer

  • Posted on Aug 24 at 02:51 PM


    Is this a real requirement, a theoretical exercise, or perhaps an interview question?! I ask because there are no timestamps, equipment IDs, location data, etc. in the data, which we would expect to see in a production requirement.

    My first question would be - how are you loading the source file into HANA? The one thing you absolutely must have is a row sequence to maintain the order of the incoming data.

    Answer these questions, then we can get onto pivoting the data into separate tables.



    • So, if I were given this task, I would certainly question the design decisions! However, let's assume there is no other way to process this data. I'll give you a few pointers, then you can try to fill in the gaps.

      The staging table requires some kind of import ID so that individual files can be processed, and as I mentioned before, you absolutely must maintain the sequence of data from the file, so it'll require a sequence ID (database management systems do not guarantee the order of data in tables, and many people do not realise this).
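A minimal sketch of such a staging table, assuming hypothetical names (STAGING_TABLE, COL1..COL5) since the actual DDL isn't shown:

```sql
-- Sketch of a staging table with the two control columns described
-- above; all names are placeholders for the real schema.
CREATE COLUMN TABLE STAGING_TABLE (
    IMPORT_ID INTEGER,        -- identifies one loaded file
    SEQ_ID    INTEGER,        -- line number within that file
    COL1      VARCHAR(100),
    COL2      VARCHAR(100),
    COL3      VARCHAR(100),
    COL4      VARCHAR(100),
    COL5      VARCHAR(100)
);
```

(IMPORT_ID, SEQ_ID) together give every staged row a stable, ordered identity, which the processing procedure can rely on.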

      Ideally we'd use declarative logic to process the data in the staging table, but I think imperative logic will make life easier for you. Perhaps someone else will provide a solution using declarative logic.

      To process the data you will require a procedure. The procedure will require an input parameter so we can pass the import ID. In the procedure declare a cursor to select data from the staging table using the import ID to read a specific file's data, and using an ORDER BY clause on the sequence ID column so we can read the data in the same order as it was in the file.

      Iterate over the file's data, an example here - Loop. Within the loop, use variables to capture values and maintain state, determine what kind of row you are reading, and INSERT the data into the final output tables.
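The steps above could be sketched roughly as follows. This is not a finished solution: the staging schema (STAGING_TABLE, COL1..COL5, IMPORT_ID, SEQ_ID), the row-classification rules, and the ID generation are all placeholder assumptions you would need to adapt.

```sql
-- A minimal imperative sketch, assuming the hypothetical staging
-- table above and simplified rules for classifying each row.
CREATE PROCEDURE PROCESS_IMPORT (IN iv_import_id INTEGER)
LANGUAGE SQLSCRIPT AS
BEGIN
    DECLARE lv_test   NVARCHAR(100) := '';
    DECLARE lv_seq_no INTEGER := 101;      -- default per the question
    DECLARE lv_id     INTEGER := 0;
    DECLARE CURSOR c_rows FOR
        SELECT COL1, COL2, COL3, COL4, COL5
        FROM   STAGING_TABLE
        WHERE  IMPORT_ID = :iv_import_id
        ORDER BY SEQ_ID;                   -- keep the file order
    FOR r AS c_rows DO
        IF :r.COL2 = '' OR :r.COL2 IS NULL THEN
            -- only the first column is filled: this row names a new test
            lv_test := :r.COL1;
        ELSEIF :r.COL2 <> 'Setpoint' THEN  -- skip header rows
            -- a data row: record the test status ...
            lv_id := :lv_id + 1;
            INSERT INTO SEQ_TABLE
            VALUES (:lv_id, :lv_seq_no, :lv_test, :r.COL5);
            -- ... and its parameter values (crude ID scheme for the sketch;
            -- a sequence would be better in practice)
            INSERT INTO PARAMETER_TABLE
            VALUES (:lv_id * 10 + 1, :lv_id, 'Setpoint', :r.COL2);
            INSERT INTO PARAMETER_TABLE
            VALUES (:lv_id * 10 + 2, :lv_id, 'Readback', :r.COL3);
        END IF;
    END FOR;
END;
```

The key idea is the state variable lv_test: header-style rows update it, and subsequent data rows are attributed to it when inserting.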

