How to force PowerDesigner to use ALTER TABLE command to change column from NULL to NOT NULL

Former Member

Hi,

we are facing a problem with PowerDesigner 16.6 SP02 when applying model changes to the database. We need to change a column that currently allows NULL values (it is not yet mandatory) so that it disallows them (i.e. make the column mandatory). We are using the Oracle 11g DBMS with some modifications to the generated SQL scripts (some pre- and post-actions around certain DDL operations).

The problem is that PD generates a table recreation: it renames the table with a tmp_ prefix, creates a new table with the NOT NULL column, and inserts the values from the old table into the new one. This is unwanted behavior for two reasons:

1) The table is big and there is a lot of data to move (so it takes too much time during the software upgrade process).
2) We cannot use our DBMS file modification (under ORA11GR1::Script\Objects\Column\ModifyColnNull), because this node is not used in the generated SQL (the table is dropped and created again).
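
For illustration, the recreation script PD produces is roughly of this shape (MY_TABLE, ID and MY_COLUMN are made-up names, not our real model, and the real script also drops and re-creates the related indexes and references):

alter table MY_TABLE rename to tmp_MY_TABLE;
create table MY_TABLE (
   ID        number not null,
   MY_COLUMN number not null
);
insert into MY_TABLE (ID, MY_COLUMN)
   select ID, MY_COLUMN from tmp_MY_TABLE;
drop table tmp_MY_TABLE;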

I have investigated this behavior a bit. When changing a column from NOT NULL to allow NULLs, PD uses the ModifyColnNull node from the DBMS tree (and generates the SQL command ALTER TABLE table MODIFY column NULL). It is the same with the Oracle 12c and SQL Server 2014 DBMSs (we use both Oracle and MS SQL Server installations). What is strange is that with the PostgreSQL 9.x DBMS it does not work even when changing from NOT NULL to NULL. So it seems that the decision whether to use ALTER TABLE or the table re-creation path is somehow affected by the model's DBMS file. And, of course, in the "Apply model changes to database" dialog we have selected the "Use alter statements when possible" and "Backup tables" options.
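
To make the two directions concrete (again with made-up names), this is the statement PD generates through ModifyColnNull and the one we would like it to generate as well:

alter table MY_TABLE modify MY_COLUMN null;      -- NOT NULL -> NULL, generated via ModifyColnNull
alter table MY_TABLE modify MY_COLUMN not null;  -- NULL -> NOT NULL, what we need instead of the table re-creation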

Can anybody help us with it?

Accepted Solutions (0)

Answers (2)

Former Member

Try populating the column with values and then running DDL that changes it to NOT NULL.
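
Something along these lines (MY_TABLE, MY_COLUMN and the default value 0 are placeholders):

update MY_TABLE set MY_COLUMN = 0 where MY_COLUMN is null;
commit;
alter table MY_TABLE modify MY_COLUMN not null;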

former_member200945
Contributor

The behavior you have seen is correct.

The reason is that by recreating the table and copying the data into it, we guarantee that no data loss will occur.

To get an ALTER TABLE statement, the following conditions must be satisfied:

1) The column must not be mandatory.

2) The column must be either at the beginning or at the end of the table; the order of the columns matters.

Former Member

Hi Phillip,

thanks for the reaction, but I have a slightly different opinion on our case.

If you are making the column mandatory (setting it NOT NULL) and NULL values exist in that column, the data will be lost with PD's approach as well (the INSERT-SELECT statement will fail in Oracle and no data will be copied into the new version of the table). And, of course, the ALTER TABLE approach will fail to modify the column for the same reason (but the data will persist in the original table).
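
A minimal reproduction of both failure modes (made-up table and column names):

create table t_demo (c number);
insert into t_demo values (null);

alter table t_demo modify c not null;
-- fails with ORA-02296 (null values found); the original table keeps its data

create table tmp_t_demo (c number not null);
insert into tmp_t_demo select c from t_demo;
-- fails with ORA-01400 (cannot insert NULL), so no rows are copied into the new table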

In our system the biggest tables have around 100 million records, approaching 100 GB of space, with almost twice that space used by the associated indexes. If we drop these indexes and references, copy the data, and create the indexes and references again, we lose a lot of time (our customers operate in 24/7 mode). This approach is simply unacceptable for us.

If PD generated a simple ALTER TABLE, we could "catch" this event in the DBMS\Script\Objects\Column\ModifyColnNull script and run our custom actions to fix the column values (make them non-NULL for all rows in the table), because at that point we know the table and column names. This way is much faster and safer than table re-creation, and in that case we really would not lose any data.
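
What we have in mind for the ModifyColnNull customization is roughly the following. This is only a sketch: the %TABLE% and %COLUMN% script variables and the replacement value 0 are assumptions that we would still have to check against the variable list in our xdb:

-- custom pre-action: fix existing NULL values, then let the generated ALTER run
update %TABLE% set %COLUMN% = 0 where %COLUMN% is null;
alter table %TABLE% modify %COLUMN% not null;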

In the other case (table re-creation) we do not know the name of the original column that is going to be altered. I looked through the DBMS scripts for the Table and Column objects, but I did not find any node where the INSERT-SELECT command might be generated, so we are not able to modify this behavior (to fix the column values to be non-NULL). We need to handle this automatically, because we are planning to use PD in an automated deployment process where the database model modification script will be generated automatically (we will compare the customer's current model with the new one that is to be installed).

If the simple "ALTER TABLE table MODIFY column NOT NULL" command is split into a sequence of dropping constraints, indexes and references, then creating the table and copying the data, and afterwards re-creating the constraints, indexes and references, we simply cannot automate this. There are more modifications in the script (some new references, tables, indexes), not only the one from the altered column, and all of them are mixed together (by the dependency logic, drops first and so on). So when the final script is generated, we are not able to parse and change it, because we do not know which DROP/CREATE was generated as a regular (from our perspective "valid") operation and which one is the result of the ALTER TABLE "split". So we really need the simple ALTER TABLE command in our case.

From this point of view I do not see any way to solve our issue; it is currently a showstopper for us. So if you have any proposal, I would really appreciate it.

And finally: is this behavior really hard-coded in the PD core, or is it somehow affected by the PDM's DBMS file? I am still thinking along those lines because, for instance, with the PostgreSQL 9.x DBMS PD is not able to generate an ALTER TABLE command even when changing a column to allow NULLs, while in Oracle 11g, 12c and SQL Server 2014 (those I have tested) the opposite direction is possible. So maybe some "dirty" modification of the DBMS xdb file would make PD use the ALTER TABLE command in both directions. But that is only my wish and speculation.