
SPS6 Core Data Services - Maintained DDL entity not updating catalog object

former_member182500
Contributor

Hi,

Implementing CDS DDL on AWS SPS6.  I had an issue with entity columns referring to table types, as described here:

http://scn.sap.com/thread/3400186

To try to move forward with development, I decided to respecify the two fields individually, rather than via a table type (a flat structure with multiple fields).

So I redeclared the table type fields as testf1 and testf2 in the hdbdd.  I committed and activated the changed hdbdd successfully, but on examining the catalog table object definition, it had not changed (still referring to the old field names).  I dropped the table (in actual fact, right-clicked the catalog entry and selected delete), and recommitted/activated the hdbdd artefact.
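For illustration, the change was of this shape (a sketch with assumed entity and type names, not my actual definitions):

```
// Before: the two fields were delivered via a structured table type
//     flds : MyStructT;       // MyStructT { testf1 : String(10); testf2 : String(10); }
// After: the same two fields respecified individually in the entity
entity TestEntity {
    key id : String(10);
    testf1 : String(10);
    testf2 : String(10);
};
```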

The table has not been re-created.

Unless I'm missing something, the hdbdd specification does not seem to update existing catalog objects correctly.

Thanks for any help, as I'd like to continue with CDS; I prefer its declarative and reuse capabilities.

Accepted Solutions (1)


former_member182500
Contributor

I suspect this may be an authorisation issue, as I'm developing with my own created user rather than the "default" SYSTEM user, and may therefore be missing the SQL privileges needed to modify objects.

Will check and report back.

former_member182500
Contributor

So I reconnected to HANA on AWS using the SYSTEM user, maintained my HDBDD, and tried to reactivate it to create the table with the new definition.  Activation fails, indicating an SQL modify issue:

This is with the SYSTEM user.  Perhaps a revert to HDBTABLE specifications is in order.

former_member182500
Contributor

I resorted to reverting to an earlier AMI, taken before the hdbdd and purchase table were maintained.

I maintained the specification of another entity, this time the definition of a table WITHOUT table types, simply adding an additional column.  The table was updated as expected.  Perhaps the issue is specific to table type expansion/change?

I am yet to be convinced of the maturity of CDS in this release, as the outstanding issue regarding aligned ordering of columns between the HDBDD definition and catalog tables still persists.

former_member182500
Contributor

Yet another possible issue with CDS: I updated an entity (table) key, adding a second key field.  I committed and activated the HDBDD; the table was not changed.

So I DROPped the table in the SQL console and reactivated the HDBDD; the table was not recreated.

This tool is supposed to allow creation and straightforward modification of objects forming a dictionary of data definitions, and this is by far my preferred method of definition (coming from a NetWeaver ABAP background).

Currently it seems a beautiful concept up until the point of functioning.  I would appreciate knowing whether, with the present SPS6 Rev. 60 release, there is:

  1. A manual step I'm missing, or
  2. Anyone else who has played with CDS on SPS6 and experienced the same table type expansion/maintenance issues I face, or who can perhaps suggest what step I may be missing (appreciated), or
  3. Formal identification of this issue by SAP, with resolution in a forthcoming release (in which case, hold off on HDBDD and progress with HDBTABLE for now).

Many thanks.

former_member182500
Contributor

Following on from my previous post, where adding a second key field to my entity description in the HDBDD and activating failed to alter the table, and DROPping the table and reactivating failed to recreate it, I just tried removing one of the non-key fields and reactivating the HDBDD.  There was now an activation error: "CDS activation failed - Activation of artefact ...failed because of an SQL error; check the database trace" (the same error as in the screenshot a few posts earlier).

So I set up an SQL trace for my SYSTEM user and repeated the steps.  Checking the trace, the activation is trying to ALTER the table to drop one of the columns, but the table does not exist.  Surely the HDBDD activation should first check for existence ("IF EXISTS" or similar) and, if necessary, perform a CREATE TABLE before trying to alter?
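For reference, such an existence check is a one-liner against the catalog; a query along these lines (schema and table name as they appear in the trace) would show whether a CREATE or an ALTER is appropriate:

```sql
-- Returns 1 if the runtime object exists, 0 if a CREATE TABLE is needed
SELECT COUNT(*) FROM "SYS"."TABLES"
 WHERE SCHEMA_NAME = 'MISSIONCONTROL'
   AND TABLE_NAME  = 'hpl.missioncontrol.data::MC.Purchase.Item';
```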

From trace:

# Statement_Exception call (thread 4773, con-id 400744) at 2013-08-08 07:03:00.074553
# con info [con-id 400744, tx-id 32, cl-pid -1, cl-ip (internal)]
# ERROR QUERY: cursor_139906187325440_c744.execute(''' ALTER TABLE "MISSIONCONTROL"."hpl.missioncontrol.data::MC.Purchase.Item" DROP ( "CURRENCY" ) ''')
# Error call (thread 4773, con-id 400744) at 2013-08-08 07:03:00.074635
# con info [con-id 400744, tx-id 32, cl-pid -1, cl-ip (internal)]
# FAILURE OCCURRED AT: ptime/query/checker/check_table.cc:1965
# con info [con-id 400744, tx-id 32, cl-pid -1, cl-ip (internal)]
# MESSAGE: invalid table name: hpl.missioncontrol.data::MC.Purchase.Item: line 1 col 30 (at pos 29)

I want to retain my context "path" to my catalog objects (for example, Purchase.Item), so I cannot use HDBTABLE, as you cannot have a "." (point) in the file name other than for the extension; I therefore created the table directly with SQL.

I executed CREATE COLUMN TABLE with two primary key fields (order, item) and one non-key field (component).  I then hoped that, by activating the HDBDD (now that the table exists), my table would be updated with the additional non-key fields (gross amount, currency, quantity, delivery date) taken from the entity definition.
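As a sketch (the column names and types here are illustrative, not my exact ones), the statement was of this shape:

```sql
-- Sketch: two key fields and one non-key field, with the table named
-- to match the repository pattern "<package>::<context>.<entity>"
CREATE COLUMN TABLE "MISSIONCONTROL"."hpl.missioncontrol.data::MC.Purchase.Item" (
    "PURCHASEORDER" NVARCHAR(10),
    "ITEM"          NVARCHAR(10),
    "COMPONENT"     NVARCHAR(10),
    PRIMARY KEY ("PURCHASEORDER", "ITEM")
);
```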

No: although the HDBDD activated successfully, the table was not updated.  It really does seem that CDS in its present form is not sophisticated enough to handle the re-creation or alteration of catalog objects.  I hope to be corrected on this point.

former_member182500
Contributor

I would be most grateful if anyone with an AWS SPS6 instance could perform a quick test with HDBDD along the lines of the following:

  • Create an HDBDD with a table type (a multiple-field structure) and some column definitions.

  • Create an entity definition in the HDBDD, using the table type somewhere in the middle of the entity definition.

  • Activate the HDBDD.  In the created catalog table object, are your table type columns at the end of the table definition?

  • Modify the HDBDD entity primary key by adding an additional key field, and activate.  Has your catalog table object been altered correctly?

Many thanks.

thomas_jung
Developer Advocate

I tested as you requested, although not on AWS.  I used my local development system, which is an internal SAP build; it has all the fixes of Rev 62 plus a few that will go into the still-in-development Rev 63.

I created the type and the entities as you described in steps 1 and 2.

Here is my source file:

namespace cdsTest.data;

@Schema: 'CDS_TEST'
context Test {
    type BusinessKey : String(10);
    type SDate : LocalDate;
    type CurrencyT : String(5);
    type AmountT : Decimal(15,2);
    type QuantityT : Decimal(13,3);
    type UnitT : String(3);
    type StatusT : String(1);
    type HistoryT {
        CREATEDBY : BusinessKey;
        CREATEDAT : SDate;
        CHANGEDBY : BusinessKey;
        CHANGEDAT : SDate;
    };

    @Catalog.tableType : #COLUMN
    Entity Header {
        key PurchaseOrderId : BusinessKey;
        nullable NoteId : BusinessKey;
        PartnerId : BusinessKey;
        Currency : CurrencyT;
        GrossAmount : AmountT;
        History : HistoryT;
        NetAmount : AmountT;
        TaxAmount : AmountT;
        LifecycleStatus : StatusT;
        ApprovalStatus : StatusT;
        ConfirmStatus : StatusT;
        OrderingStatus : StatusT;
        InvoicingStatus : StatusT;
    };
};

My catalog table places the table type columns exactly where I specified them.  I also tested this on the TechEd systems (public Revision 61) and it worked there as well.

>Modify the HDBDD entity primary key by adding an additional key field, and activate.  Has your catalog table object been altered correctly?

Here is what I added:

    @Catalog.tableType : #COLUMN
    Entity Header {
        key PurchaseOrderId : BusinessKey;
        key SecondTestKey : BusinessKey;
        nullable NoteId : BusinessKey;
        PartnerId : BusinessKey;
        Currency : CurrencyT;
        GrossAmount : AmountT;
        History : HistoryT;
        NetAmount : AmountT;
        TaxAmount : AmountT;
        LifecycleStatus : StatusT;
        ApprovalStatus : StatusT;
        ConfirmStatus : StatusT;
        OrderingStatus : StatusT;
        InvoicingStatus : StatusT;
    };

I do receive an error upon activation.

However, this is because I'm adding a new column to an existing table, and because it's a key field it can't be nullable.  Therefore it needs a default value; otherwise, what should the system do with existing records (they can't be null, and we haven't given the system a value)?

The trick is to drop the table, but you should NEVER do so directly in the catalog.  In some of your previous attempts you went directly to the catalog and tried to drop or adjust the objects.  As soon as you do, you get out of sync with the repository.  In fact, a developer shouldn't even have drop privileges on a repository-managed schema.  All these actions should be done through the design-time objects themselves.

Because I had no data (or didn't care about the data) and didn't want to add a default value, I decided to just drop the table.

I commented out the entity definition and activated the hdbdd file.  This caused a drop of the catalog table.  I then uncommented the entity definition and re-activated.  This created the table anew with the additional key column.
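In hdbdd terms the drop-and-recreate cycle is just a comment toggle (sketch; entity body abbreviated from the example above):

```
// Activation 1: entity commented out -> the catalog table is dropped
//  @Catalog.tableType : #COLUMN
//  Entity Header {
//      key PurchaseOrderId : BusinessKey;
//      ...
//  };

// Activation 2: uncomment the entity (now including the new key field)
// and re-activate -> the table is created fresh with the new structure
```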

former_member182500
Contributor

Hi Thomas,

Many thanks for your time on a Sunday!

I have been able to recreate the "misalignment" of columns on my AWS SPS6 Revision 60 system, taking your example with a small adjustment.  The first time I activated the HDBDD, directly using your entity Header example without change, all was OK and aligned between the HDBDD and the catalog table (note I created the entity under an additional context, "Purchase", to reflect my earlier hierarchy).

I then added a second table type declaration to the Header entity, called ChangedHistory, after your History field, as below, referring to the same table type HistoryT for simplicity:

namespace hpl.missioncontrol.data;

@Schema : 'MISSIONCONTROL'
context Test {
    type BusinessKey : String(10);
    type SDate : LocalDate;
    type CurrencyT : String(5);
    type AmountT : Decimal(15,2);
    type QuantityT : Decimal(13,3);
    type UnitT : String(3);
    type StatusT : String(1);
    type HistoryT {
        CREATEDBY : BusinessKey;
        CREATEDAT : SDate;
        CHANGEDBY : BusinessKey;
        CHANGEDAT : SDate;
    };

    context Purchase {
        @Catalog.tableType : #COLUMN
        Entity Header {
            key PurchaseOrderId : BusinessKey;
            nullable NoteId : BusinessKey;
            PartnerId : BusinessKey;
            Currency : CurrencyT;
            GrossAmount : AmountT;
            History : HistoryT;
            ChangedHistory : HistoryT;
            NetAmount : AmountT;
            TaxAmount : AmountT;
            LifecycleStatus : StatusT;
            ApprovalStatus : StatusT;
            ConfirmStatus : StatusT;
            OrderingStatus : StatusT;
            InvoicingStatus : StatusT;
        };
    };
};

And the screenshot below shows the ChangedHistory columns at the end of the table, and therefore out of sequence with the entity description:

Thanks.

thomas_jung
Developer Advocate

I can confirm that it does the same.  I added the table type to the existing entity and it places the columns at the end of the catalog table (even in my internal build).  If you drop the table (using the method I described before) and then re-activate with the new table type, it places them correctly.

That would lead me to believe that it is perhaps intentional that table types added to existing tables get appended at the end.  I'm not sure why, since column tables don't have the fragmentation problems that row tables do when adding columns.  However, I will send a mail to the developer tomorrow to confirm whether this is intentional or not.

former_member182500
Contributor

Hi Thomas,

Were you able to get a response from the developer regarding the ordering of table types in maintained tables based on HDBDD entity specifications?

Thanks.

thomas_jung
Developer Advocate

No response yet.  I will be sure to post here once I hear anything back.

thomas_jung
Developer Advocate

The developer was back from vacation today and responded.  He confirmed that the placement at the end of the table is expected, because this is how ALTER TABLE in SQL (which they generate behind the scenes) currently works.  There is no syntax in ALTER TABLE which they can use to position the fields.  It's a known limitation.  The current workaround is, as described, to effectively drop the table first; of course, one must consider the impact on the data in the table.  The development guide documentation writer was copied, and I would expect to see some mention of this in a future revision of the documentation.
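In plain SQL terms, the generated statement can only append; a sketch with an illustrative column name:

```sql
-- HANA's ALTER TABLE ... ADD has no positional clause, so added
-- columns always land at the end of the table definition
ALTER TABLE "MISSIONCONTROL"."hpl.missioncontrol.data::Test.Purchase.Header"
    ADD ("ChangedHistory.CREATEDBY" NVARCHAR(10));
```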

former_member182500
Contributor

Thomas, thanks for following up.  It's useful that it's now logged here as a limitation, and it will be beneficial to have it noted in the forthcoming guide.  Thanks again.

Former Member

Hello Thomas,

could you please tell me how I can change a column's data type in CDS?  I wanted to change from Decimal(5,2) to Decimal(10,2) and it doesn't work.  I also don't want to drop the table, because this would cause errors in all of my functions based on this table.

Could you please help me?

thomas_jung
Developer Advocate

The only way I know of is to drop the table (by commenting it out in the hdbdd file, activating, and then uncommenting/changing and re-activating).

Former Member

Ok thank you.

Unfortunately, if I have 100 SQL functions and procedures associated with one another, and even just 1 of them uses this table, then all the functions will get errors and I will have to activate them all again.

Maybe you can suggest a fix for this to the developers.  I also used a type because I was not sure of the dimension, but if I change it now, all my tables will get errors.  I also have data in the tables, so I have to move the data around to keep it safe.

Thank you again for the response,

Alex

thomas_jung
Developer Advocate

I've not tried this myself, so I can't say if it will work.  However, have you tried using the Activate Anyway option on the hdbdd when you get the error with the dependent functions and procedures?  This will certainly break the procedures; but if you put the table back right away, that shouldn't be a problem.

> so I have to move the data around to keep it safe.

Yes, you are still responsible for backing up and restoring the data manually in this case.
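A minimal manual backup/restore cycle might look like this (a sketch; schema, package, and table names here are hypothetical):

```sql
-- Stash the data before dropping/re-creating the table via the hdbdd cycle
CREATE COLUMN TABLE "MYSCHEMA"."HEADER_BACKUP"
    AS (SELECT * FROM "MYSCHEMA"."pkg::Test.Header");

-- ...drop and re-create the table by commenting out / re-activating the hdbdd...

-- Restore once the new structure is active (columns must still be compatible)
INSERT INTO "MYSCHEMA"."pkg::Test.Header"
    SELECT * FROM "MYSCHEMA"."HEADER_BACKUP";
DROP TABLE "MYSCHEMA"."HEADER_BACKUP";
```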

Former Member

Hi Thomas,

We have hundreds of tables created in the catalog using SQL statements.  Now we are planning to use HDBDD to create tables and procedures.  Is there any way to generate HDBDD scripts for existing tables?  Also, when I try to alter an existing table, e.g. SCHEMA1.TEST1, using HDBDD, it creates a new table like SCHEMA1.PKG1::TEST1.  This creates a duplicate of the existing table, and I will need to change all existing objects which refer to this table to use the new table name.  Can you please help me with this?

thomas_jung
Developer Advocate

>Is there any way to generate HDB scripts for existing tables ?

There is no SAP-supported tool to do this.  I wrote a little app for my personal use that does this.  What release are you on?  I can send you a copy of this tool, unsupported.


>using HDB it creates a new table like SCHEMA1.PKG1::TEST1

Yes, this is simply how HDBDD works.  It must prefix all created objects with the package.


>This will create a duplicate of the existing table and I will need to change all existing objects which refer to this table.

Yes, this is true; there is no way around it.  You can perhaps create a public synonym to redirect old calls to the new table.  The downside to this approach is that synonyms are not transportable via a DU.
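A sketch of that redirection, using the names from your example (and assuming the original SCHEMA1.TEST1 has been dropped first):

```sql
-- Point the old name at the repository-managed table so existing
-- references keep working. (Shown as a schema-local synonym; a
-- PUBLIC SYNONYM would cover unqualified references instead.)
CREATE SYNONYM "SCHEMA1"."TEST1" FOR "SCHEMA1"."PKG1::TEST1";
```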

Former Member

Thank you so much, Thomas.  I really appreciate your quick response.  We are on SPS08 Revision 85.  I also have a couple more questions; I'd appreciate your help with these.

1. What is the benefit of HDBDD tables over tables created using SQL?  The only reason I am planning to use this is for version control and change management.  Is there any way to enable version control for catalog objects?

2. Is there any way to view the change_id associated with an object?  In our environment, multiple users work on the same objects.  The first time any user activates an object, he associates the change_id with it.  Later on, when other users change the same object, they are not able to find the change_id under which the object is tracked.

thomas_jung
Developer Advocate

>Really appreciate your quick response. We are on SPS08 Revision 85.

My app is from SPS 09 (Rev 90), so it won't run directly on SPS 08; I used newer libraries.  You're welcome to study it, though.  The core concepts are certainly doable on older releases as well.

https://dl.dropboxusercontent.com/u/643382/HCO_XSOPEN_TBL_CONV.tgz

>What is the benefit of HDB tables over tables created using SQL. The only reason I am planning to use this is for version control and change management ?

As you said: version control and change management.  Also the ability to transport the content via a Delivery Unit.  As of SPS 09 we also have some great lifecycle management features.  If you change the key structure, column order, data types, etc., then upon activation we will export the data and re-import it into the table.  Without HDBDD you have to do this yourself.  Don't forget that we also have features SQL doesn't, like associations, and now libraries like XSDS to use the entities and associations directly.  In the near future we will generate OData services directly from the HDBDD definitions, without the need for the XSODATA object.


>Is there any way to enable version control for catalog objects ?

No.

>Is there any way to view the change_id associated with an Object ?

Not in the standard tools.  You could maybe write your own query for this.  The standard tools only allow you to search by status, change ID, contributor, and release date.

>Later on, when other users change the same object, they are not able to find the change_id under which the object is tracked.

If the object is still associated with an open change ID, anything they do should be added to that change without them having to choose a specific one.

Former Member

Thank you so much, Thomas!!  Can you share some documentation related to the transport tooling?  I have created a transport route and am able to successfully migrate DUs from Dev to Stage.  Now, if I have to migrate the same DUs from Stage to Prod, do I need to set up all the DUs in Stage and create another route between Stage and Prod?  If yes, then how will changes be tracked in Stage?  We don't do any development directly in Stage.  So when I migrate objects from Dev to Stage, will they automatically be tracked under another change_id in Stage, which the transport tooling will use to migrate changes from Stage to Prod?

thomas_jung
Developer Advocate

>Can you share some documentation related to Transporter.

Well, we are way outside the topic of this thread now.  I would suggest reviewing the HANA Admin Guide for more details on DUs and HALM.  I do believe that when you import a DU into a system that has change tracking turned on, the activation at the end of the DU import creates a new change in that system.

Former Member

Thanks, Thomas.  I will check the Admin Guide or create another post on this.

Former Member

Hi Thomas Jung, could you provide me with the program you created to generate hdbdd files from existing tables?  I am running in the HCP trial environment... Thanks!!

Anup

thomas_jung
Developer Advocate

The link to download it is in the thread above, but here it is again:

https://dl.dropboxusercontent.com/u/643382/HCO_XSOPEN_TBL_CONV.tgz

It won't be usable on the HCP trial, however.  First, because on the HCP trial you don't have the rights to import your own DUs.  Even if you did, the HCP trial is not on SPS 09 yet, and that's the release level this content was created on; I use features only available in SPS 09.  You could perhaps study the code, adjust it for the older revision using alternative features, and manually implement it in your own schema, I guess.

Answers (1)


former_member182500
Contributor

As an aside to this post (whereby the addition of new table key fields, and the ordering of those key fields, is not reflected in the generated catalog object): I have just experienced this issue when adding a new key field to a standard .hdbtable definition (not CDS).  The newly added key field appeared at the end of the generated catalog object.

To have my definition order reflected in the generated catalog object, I had to delete the .hdbtable definition and recreate it (AWS Rev. 68).

former_member182500
Contributor

As a further aside: as presented by Thomas in openSAP HANA 3, week 1, unit 4 (CDS), the issues with adding additional columns and changing column types in CDS are now resolved in SPS09.

Fantastic!

former_member182302
Active Contributor

Thanks for the update Jon

0 Kudos

That's wonderful; now we do not have to delete and recreate the table.  I have not yet gone through week 1 but will check it.