How to traverse or get column values dynamically from a table (HANA 1.0 SP12) using SQLScript

er_sharmasunil
Explorer

Hi All,

I am stuck on the scenario below.

For example, a test, let's call it 'Current Calibration Data', is conducted on some product with parameters such as setpoint, readback, and v_offset. Once the product is tested, it generates values for these parameters. All of these steps are saved in a flat file (.txt), which is my input (source) table, and from the input table I need to populate the test status and parameters into the target tables SEQ_RESULT and PARAMETERS.

The input table has around 25 columns, and I need to access column and row values dynamically in order to insert into two different tables.

I am using SQLScript in HANA, and I am now stuck on how to control the columns' values and populate them into the two different tables.

To achieve this, I am trying to use the mathematical trick of transposing a matrix. Let's take row 2 into an internal table with an additional column, say DUMMY, and create one more table, MATRIX, which has a DUMMY column plus five other columns whose records contain only 0s and 1s, similar to a transpose matrix, something like what is shown below. In real life it will be around 25-26 columns. I don't see any other way to deal with this situation, so if the SAP gurus can guide me or suggest what else can be done, I would be really grateful.
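One common way to emulate such a transpose/unpivot with a helper table is a cross join plus a CASE expression. The sketch below is only an illustration of that idea; COL_MATRIX and all other names here are hypothetical and not from the original post:

```sql
-- Hypothetical helper table: one row per source column (the "matrix").
CREATE COLUMN TABLE COL_MATRIX (COL_NO INTEGER, COL_NAME VARCHAR(10));
INSERT INTO COL_MATRIX VALUES (1, 'COL1');
INSERT INTO COL_MATRIX VALUES (2, 'COL2');
INSERT INTO COL_MATRIX VALUES (3, 'COL3');
INSERT INTO COL_MATRIX VALUES (4, 'COL4');
INSERT INTO COL_MATRIX VALUES (5, 'COL5');

-- Cross join the data rows against the matrix and pick the matching column
-- with CASE, so each column value lands on its own output row.
SELECT m.COL_NO,
       m.COL_NAME,
       CASE m.COL_NO
            WHEN 1 THEN u.COL1
            WHEN 2 THEN u.COL2
            WHEN 3 THEN u.COL3
            WHEN 4 THEN u.COL4
            WHEN 5 THEN u.COL5
       END AS VALUE
  FROM UNPIVOT_TABLE u
 CROSS JOIN COL_MATRIX m;
```

For 25-26 columns the CASE expression grows accordingly, but the pattern stays the same.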

I can't use the flowgraph Pivot/Unpivot nodes because I need to deal with both row and column values.

Input Data:

CREATE COLUMN TABLE UNPIVOT_TABLE (COL1 VARCHAR(100), COL2 VARCHAR(100), COL3 VARCHAR(100), COL4 VARCHAR(100), COL5 VARCHAR(100));

INSERT INTO UNPIVOT_TABLE VALUES ('Voltage Regulation, AEBus (Coldfire)','','','','');

INSERT INTO UNPIVOT_TABLE VALUES ('SETPOINT','READBACK','OUTPUT','V_Offset','STATUS');

INSERT INTO UNPIVOT_TABLE VALUES ('0.000V','1.000V','0.098V','4','PASS');

INSERT INTO UNPIVOT_TABLE VALUES ('Load: 3R9','','','','');

INSERT INTO UNPIVOT_TABLE VALUES ('Current Calibration Data','','','','');

INSERT INTO UNPIVOT_TABLE VALUES ('SETPOINT','READBACK','OUTPUT','I_Offset','');

INSERT INTO UNPIVOT_TABLE VALUES ('0.000 A','0.100 A','-0.004 A','86','');

INSERT INTO UNPIVOT_TABLE VALUES ('85.000 A','85.022 A','85.000 A','86','');

INSERT INTO UNPIVOT_TABLE VALUES ('Current Regulation, AEBus (Coldfire) ','','','','');

INSERT INTO UNPIVOT_TABLE VALUES ('Level ','Setpoint','Readback','Actual Output','STATUS');

INSERT INTO UNPIVOT_TABLE VALUES ('10%','10.00 A','10.00 A','10.04 A','PASS');

INSERT INTO UNPIVOT_TABLE VALUES ('50%','50.00 A','50.00 A','50.06 A','PASS');

Target tables structure will be

1. CREATE COLUMN TABLE SEQ_TABLE (ID INTEGER PRIMARY KEY, SEQ_NO INTEGER, TEST VARCHAR(100), STATUS VARCHAR(100));

OUTPUT 1: SEQ_TABLE

ID(PK)  SEQ_NO  TEST                                  STATUS
1       101     Voltage Regulation, AEBus             PASS
2       102     Load 3R9
3       103     Current Calibration Data              FAIL
4       104     Current Calibration Data              FAIL
5       105     Current Regulation, AEBus (Coldfire)  PASS
6       106     Current Regulation, AEBus (Coldfire)  PASS

2. CREATE COLUMN TABLE PARAMETER_TABLE (ID INTEGER PRIMARY KEY, SEQ_RESULT_ID INTEGER, PARAMETER VARCHAR(100), VALUE VARCHAR(100)); Attaching the output table for reference only.

a. SEQ_RESULT_ID comes from SEQ_TABLE.ID.

b. The SEQ_NO field can default to 101.

Thank you in advance for your guidance and help.

Thank You

Sunil Sharma

Accepted Solutions (0)

Answers (1)


michael_eaton3
Active Contributor

Hello

Is this a real requirement, some theoretical exercise, or perhaps an interview question?! I ask because there are no timestamps, equipment IDs, location data, etc. in the data, which we would expect to see in a production requirement.

My first question would be - how are you loading the source file into HANA? The one thing you absolutely must have is a row sequence to maintain the order of the incoming data.

Answer these questions, then we can get onto pivoting the data into separate tables.

Michael

er_sharmasunil
Explorer

Hi Michael,

Thank you for your response.

I am glad you asked these questions.

This is my current client assignment. I am receiving flat files (.txt format). The data in these files is not in a structured format, but the records are separated with a tab delimiter. So it is unstructured, tab-separated text data, which I first convert into a CSV file (just saving it with a .csv extension) and then load into HANA tables. The input table I mentioned above is my staging table, which is nothing but the raw file data in a HANA table. Once the logic is defined for these flat files, I will work on the ETL part to load the data into HANA using cfg files.

As you guessed, the file does contain equipment, date, location, and many other records, which I was able to extract into other tables. Now I am stuck on how to extract these records from the rows and columns, as I mentioned above. I am attaching the staging table for your reference; I hope this gives you more clarity. Let me know if you have any other questions.

staging-table.png

Thank You

Sunil Sharma

michael_eaton3
Active Contributor

So, if I were to be given this task, I would certainly question the design decisions! However, let's assume there is no other way to process this data. I'll give you a few pointers, then you can try and fill the gaps.

The staging table requires some kind of import ID so that individual files can be processed, and as I mentioned before, you absolutely must maintain the sequence of data from the file, so it'll require a sequence ID (database management systems do not guarantee the order of data in tables, and many people do not realise this).
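As an illustration of those two requirements, the staging table might look like the following. This is only a sketch; the table name, column names, and key choice are assumptions, not part of the original design:

```sql
-- IMPORT_ID identifies one file load; SEQ_ID preserves the original row order.
CREATE COLUMN TABLE STAGING_TABLE (
    IMPORT_ID INTEGER NOT NULL,   -- one value per loaded file
    SEQ_ID    INTEGER NOT NULL,   -- line number within that file
    COL1 VARCHAR(100), COL2 VARCHAR(100), COL3 VARCHAR(100),
    COL4 VARCHAR(100), COL5 VARCHAR(100),
    PRIMARY KEY (IMPORT_ID, SEQ_ID)
);

-- SEQ_ID can be generated during the load, for example from a sequence:
CREATE SEQUENCE STAGING_SEQ;
-- INSERT INTO STAGING_TABLE VALUES (1, STAGING_SEQ.NEXTVAL, ...);
```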

Ideally we'd use declarative logic to process the data in the staging table, but I think imperative logic will make life easier for you. Perhaps someone else will provide a solution using declarative logic.

To process the data you will require a procedure. The procedure will need an input parameter so we can pass the import ID. In the procedure, declare a cursor that selects data from the staging table, using the import ID to read a specific file's data and an ORDER BY clause on the sequence ID column so the data is read in the same order as it appeared in the file.

Iterate over the file's data - an example here - Loop. Within the loop we can use variables to capture values and maintain state, make determinations about what kind of row we're reading, and INSERT data into the final output tables.
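Putting those pointers together, a minimal SQLScript skeleton might look like the sketch below. All table and column names, and the classification rules in the IF branches, are assumptions inferred from the sample data, not a tested solution:

```sql
CREATE PROCEDURE PROCESS_IMPORT (IN p_import_id INTEGER)
LANGUAGE SQLSCRIPT AS
BEGIN
    DECLARE v_id   INTEGER      := 0;   -- running SEQ_TABLE primary key
    DECLARE v_test VARCHAR(100) := '';  -- current test name, carried as state
    DECLARE CURSOR c_rows FOR
        SELECT COL1, COL2, COL3, COL4, COL5
          FROM STAGING_TABLE
         WHERE IMPORT_ID = :p_import_id
         ORDER BY SEQ_ID;               -- read rows in file order

    FOR r AS c_rows DO
        IF :r.COL2 = '' AND :r.COL3 = '' THEN
            -- only COL1 is filled: this row names a new test section
            v_test := :r.COL1;
        ELSEIF TRIM(UPPER(:r.COL1)) NOT IN ('SETPOINT', 'LEVEL') THEN
            -- a data row: record one test occurrence with its status
            v_id := :v_id + 1;
            INSERT INTO SEQ_TABLE VALUES (:v_id, 101, :v_test, :r.COL5);
            -- PARAMETER_TABLE inserts for COL1..COL4 would go here, pairing
            -- each value with header labels captured from the preceding row
        END IF;
    END FOR;
END;
```

The key design point is that all state (the current test name, the running ID, the last-seen header labels) lives in procedure variables that persist across loop iterations, which is exactly what the strict ORDER BY on the sequence ID makes reliable.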

Michael