
IDM assignments

Sankar_Aravind
Participant
0 Kudos

Hi All,

Good Evening.

We have read some posts here related to my query, but I am looking for more details on the following:

1) Is there any way to mass-assign privileges in the IDM business role/privilege assignment screen, for example by uploading all privileges for a particular system via Notepad, Excel, or a similar file, so that privileges can be assigned to users in bulk?

2) I have seen some answers where the experts discussed parameters like MXREF_MX_PRIVILEGE and jobs (I am not sure whether my query relates to this object). But from an analyst's perspective, where we do not provide access to any developer studios and only provide access to the IDM role assignment screen, how can we achieve the mass assignment?

Or is there any other easy suggestion to achieve this, keeping in mind that we will not provide any access to temp table changes or developer studios? Please advise the steps.

Thank you.

Accepted Solutions (0)

Answers (3)


richard_pietsch
Active Contributor

Hi,

the problem is that with the regular "change user" task you can only change the selected user; there is no mass processing. In my use case, the user administration team wants to assign various privileges to one or more users.

So, what I did is:
1) Create a self-service form with a file upload attribute and a single-select attribute for the repository to be processed. I also added an attribute named JOB_ID where I set the name of the job to be triggered (see step 3).


2) Create a post-process for this form with two process steps: the first one copies the uploaded file into a pre-defined folder (and a backup copy into another one); the second one triggers a job which processes the request.



The job-trigger process creates a temporary global variable using the function uSetGlobalVar, which holds the ID of the user who executed the process. That's because the attribute values of the UI form are stored temporarily on this ID, and I need these values later in the job.

3) Create a job that processes the necessary actions; in my case a Z-table is filled from the upload, the privilege assignments are done, and afterwards the Z-table as well as the global variable are cleared again.



To get the information needed (filled in by the user in the UI form), I read the global variable again with the function uGetGlobalVar. Having the user ID again, I can use the function uIS_GetValue to get the input values of the Z-attributes from the UI form.
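A minimal sketch of this global-variable hand-off between the form trigger and the job. The uSetGlobalVar / uGetGlobalVar / uIS_GetValue calls are the IDM-internal functions named above; since the real IDM runtime provides them, they are stubbed here with plain JavaScript objects so the control flow can be followed stand-alone. The MSKEY, store ID, and Z-attribute names are invented for illustration:

```javascript
// Stubs standing in for IDM-internal functions and storage.
// In a real installation these are provided by the IDM runtime.
var globalVars = {};                       // stand-in for IDM global variables
var identityStore = {                      // stand-in for attribute storage per MSKEY
    "100042": { "Z_JOB_ACTIVITY": "Lock user", "Z_FILE_NAME": "UserLockList.csv" }
};

function uSetGlobalVar(name, value) { globalVars[name] = value; }
function uGetGlobalVar(name) { return globalVars[name]; }
function uIS_GetValue(mskey, storeId, attrName) {
    return identityStore[mskey][attrName];
}

// Step 2: the job-trigger process remembers who submitted the form
function onFormSubmitted(executingUserMskey) {
    uSetGlobalVar("Z_MASS_UPLOAD_USER", executingUserMskey);
}

// Step 3: the job reads the submitter back and fetches the form input
// stored temporarily on that user's MSKEY
function onJobRun() {
    var mskey = uGetGlobalVar("Z_MASS_UPLOAD_USER");
    var activity = uIS_GetValue(mskey, 1, "Z_JOB_ACTIVITY");
    return { mskey: mskey, activity: activity };
}

onFormSubmitted("100042");
var result = onJobRun();   // { mskey: "100042", activity: "Lock user" }
```

The key point is that the global variable only carries the user ID; all the actual form input travels via the temporary attribute values on that ID, which is why the variable must be cleared again at the end of the job.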

With this kind of configuration I cover quite a lot of processes, not only mass privilege assignment (e.g. mass locks/unlocks of users, privilege replacements, attribute maintenance, ...).

Regards, Richard

Steffi_Warnecke
Active Contributor

I also use jobs that are started via UI masks.

  • Jobs without any additional input are started via a multi-select field (listing all available jobs); the job ID and a script trigger them.
  • The more complicated ones that need input (like a filename etc.) are available for some users via the IDM admin UI for a separate repository.

But I would love to achieve it the way you did. I have never worked with global variables before, but what I would like to know first:

If you use the file upload field, isn't the file first saved in the IDM database? How are you copying it to your preferred location?


Regards,

Steffi.

richard_pietsch
Active Contributor

Hi Steffi,
the file to be uploaded is stored as an attribute value on the MSKEY of the executing user. I therefore do a simple file copy to a server path and delete the temporary attribute afterwards. The copy process is a To Generic pass with some input attributes:

Within the script a fixed file name is set depending on the selected job activity (this is what the pre-defined jobs expect); one copy is stored in the "processing" folder, another one with a timestamp is stored in a backup folder (that's to check the file afterwards in case of any errors during the process). Here I use internal functions such as uFromHex and uToFile.

// Main function:custom_copyFileToShare
// script stores the uploaded file in the process directory 
// with a fixed filename depending on the activity and in the
// backup directory with a timestamp 

function custom_copyFileToShare(Par) {

    var userMskey = Par.get("MSKEY");
    var activity = Par.get("Z_JOB_ACTIVITY");
    var fileAttribute = Par.get("FILE_ATTRIBUTE");
    var DirectoryProcess = Par.get("FILE_DIRECTORY_PROCESS");
    var DirectoryBackup = Par.get("FILE_DIRECTORY_BACKUP");

    // transform "\" into "\\"
    //******************************
    DirectoryProcess = uReplaceString(DirectoryProcess, "\\", "\\\\");
    DirectoryBackup = uReplaceString(DirectoryBackup, "\\", "\\\\");

    // get filename from binary
    //******************************
    var fileArray = fileAttribute.split(":");
    var filefullname = fileArray[0];
    var binary = fileArray[1];
    var filePlain = uFromHex(binary);

    // set process filename depending on activity
    //*************************************************
    var processfilename = "";
    if (activity == "Lock user") {
        processfilename = "UserLockList.csv";
    } else if (activity == "Unlock user") {
        processfilename = "UserUnlockList.csv";
    } else if (activity == "Change validity") {
        processfilename = "UserValidity.csv";
    } else if (activity == "Role assignment") {
        processfilename = "UserRoleAssignment.csv";
    } else if (activity == "Update company list") {
        processfilename = "CompanyList.csv";
    } else if (activity == "Update company assignments") {
        processfilename = "CompanyAssignmentList.csv";
    } else if (activity == "Role replacement") {
        processfilename = "RoleReplacementList.csv";
    } else {
        uSkip(2);
    }

    // add timestamp to filename
    //******************************
    var file = processfilename.split(".");
    var desc = file[0];
    var ext = file[1];
    var timestamp = custom_getTimestamp();
    var backupfilename = desc + "_" + timestamp + "." + ext;

    // set file paths
    //******************************
    var path_process = DirectoryProcess + "\\" + processfilename;
    var path_backup = DirectoryBackup + "\\" + backupfilename;

    // store files in directories
    //******************************
    uToFile(path_process, filePlain, "FALSE");
    uWarning("Current file has been downloaded to Share [" + path_process + "]");
    uToFile(path_backup, filePlain, "FALSE");
    uWarning("Backup file has been downloaded to Share [" + path_backup + "]");

    return Par;
}
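For anyone studying the "get filename from binary" step outside IDM: the uploaded attribute value has the form `filename:hexcontent`, and uFromHex turns the hex part back into plain text. A rough stand-alone equivalent, with Node.js `Buffer` playing the role of the IDM-internal uFromHex (the sample attribute value below is invented):

```javascript
// Stand-alone illustration of decoding the uploaded file attribute.
// uFromHex is IDM-internal; Buffer.from(hex, "hex") stands in for it here.
function decodeFileAttribute(fileAttribute) {
    var sep = fileAttribute.indexOf(":");          // split on the first ":" only
    var fileName = fileAttribute.substring(0, sep);
    var hexBody = fileAttribute.substring(sep + 1);
    var content = Buffer.from(hexBody, "hex").toString("utf8");
    return { fileName: fileName, content: content };
}

// "48656C6C6F" is hex for "Hello" (sample value, not real upload data)
var decoded = decodeFileAttribute("upload.csv:48656C6C6F");
// decoded.fileName === "upload.csv", decoded.content === "Hello"
```

Note that splitting on only the first ":" is slightly more defensive than `split(":")` in the script above, in case the hex body ever contained that character, though for pure hex content both behave the same.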
Sankar_Aravind
Participant
0 Kudos

Thanks for all your replies.

May I know if there is any way, in the IDM screen or through the Web Dynpro UI, to have something like a button to upload a file so that IDM reads it and assigns the privileges in bulk?

The whole agenda is mass assignment of privileges without giving access to Eclipse or other tools.

former_member2987
Active Contributor
0 Kudos

Hi there,

I would think carefully about doing this the way you are considering, as there are some potential security issues. The method that Steffi is using would be much more secure.

If you have a UI process that just uploads from a file, there are fewer controls to ensure that the file contents are correct. If you run this from a job, you can potentially view the contents of the file from the UI.

Now, it's probable that you have some access controls on the UI process and on the file location where the upload file is stored (and on how it gets generated), but it just does not feel terribly secure to me.

So I guess take this for just another opinion remembering that opinions are like noses, we all have one! 🙂

Good luck!

Matt

richard_pietsch
Active Contributor
0 Kudos

Hi Matt,
that's quite a good point.
I have UI access controls active for the UI forms; the file content is checked within the initialization scripts of a pass.
The file structure is basically pre-defined, as it's the generated output of another tool.
Regards, Richard

Steffi_Warnecke
Active Contributor
0 Kudos

matt.pollicove I don't think my process is so much more secure, because I also just read in the content of a file with certain expected content (the first step of the job is the "From ASCII" pass into a temp database table) and then use that going forward.

The only difference I can see is that I created a whole new repository for this, so users can provide their parameters (filename, which repository, etc.) via repository constants, which I then use in the jobs they start.


In that regard my way is probably more insecure, because for the jobs with input parameters I needed to give them access to the system configuration tab of the admin UI, where they could wreak all kinds of havoc within the repositories.


That's why I like Richard's approach: I have more control over what they can actually see on the UI mask when I can create one myself, instead of giving them access to predefined ones.


Regards,

Steffi.