Hi Folks,
As you can tell, my question is more about process than technology, and I'm basically looking for some ideas and how-to's.
Here is what we already have in place to make our development guidelines available:
Where this often breaks down is that the PDF document doesn't get sent out in the first place, and there's also no formal process to have the developer acknowledge that s/he received, read and understood the guidelines and plans to adhere to them.
Have any of you implemented some kind of process within SAP to streamline and document this? For example, I'm wondering if it could work to send the PDF document via the SAP Business Workplace as an express document to new developers and have them acknowledge that they received and read it via a response message. They'd then only get their developer keys after we received this response. Are there perhaps even some standard workflows for something like this?
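To make the idea a bit more concrete, here's roughly the kind of thing I have in mind - an untested sketch using the Business Communication Service classes (the recipient 'NEWDEV' and the way we'd get hold of the PDF content are of course placeholders):

* Untested sketch: send the guidelines PDF as an express document
* to a new developer's SAP Business Workplace inbox.
DATA: lo_request   TYPE REF TO cl_bcs,
      lo_document  TYPE REF TO cl_document_bcs,
      lo_recipient TYPE REF TO if_recipient_bcs,
      lx_bcs       TYPE REF TO cx_bcs,
      lt_pdf       TYPE solix_tab,      " PDF content - filled elsewhere
      lv_pdf_size  TYPE so_obj_len.

TRY.
    lo_request = cl_bcs=>create_persistent( ).

    lo_document = cl_document_bcs=>create_document(
                    i_type    = 'PDF'
                    i_hex     = lt_pdf
                    i_length  = lv_pdf_size
                    i_subject = 'Development guidelines - please confirm' ).
    lo_request->set_document( lo_document ).

*   'NEWDEV' is a placeholder for the new developer's user ID
    lo_recipient = cl_sapuser_bcs=>create( 'NEWDEV' ).
    lo_request->add_recipient( i_recipient = lo_recipient
                               i_express   = 'X' ).  " express document

    lo_request->send( ).
    COMMIT WORK.
  CATCH cx_bcs INTO lx_bcs.
    MESSAGE lx_bcs->get_text( ) TYPE 'I'.
ENDTRY.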
Do you have other suggestions based on what is already done elsewhere?
Thanks for any feedback you have!
Cheers
Baerbel
This is so funny, because it's one of the simple examples I dreamt up for the Workflow book (Ch. 16.3.2 in the third edition if you have a copy somewhere in your office).
To summarize: put a dummy task with all the information as the task text in a workflow, and use the task's 'confirm end of processing' flag to act as the confirmation. Send it to the new developer as part of your onboarding - see the sketch below.
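If you go that route, triggering the workflow during onboarding can be as simple as this sketch - the template ID 'WS99900001' and the 'DEVELOPER' container element are placeholders for whatever you model in your own workflow:

* Sketch: trigger the acknowledgement workflow for a new developer.
DATA: lt_container TYPE STANDARD TABLE OF swr_cont,
      ls_container TYPE swr_cont,
      lt_messages  TYPE STANDARD TABLE OF swr_messag,
      lv_rc        TYPE sy-subrc,
      lv_wiid      TYPE swr_struct-workitemid.

ls_container-element = 'DEVELOPER'.
ls_container-value   = 'NEWDEV'.       " the new developer's user ID
APPEND ls_container TO lt_container.

CALL FUNCTION 'SAP_WAPI_START_WORKFLOW'
  EXPORTING
    task            = 'WS99900001'     " your acknowledgement workflow
  IMPORTING
    return_code     = lv_rc
    workitem_id     = lv_wiid
  TABLES
    input_container = lt_container
    message_lines   = lt_messages.

IF lv_rc <> 0.
* handle/log the error, e.g. using lt_messages
ENDIF.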
But when I read "we don't have a proper QA-process (yet)", I would really focus less on guidelines and more on code quality - time spent there is time better spent. I've seen some truly awful code that conforms to guidelines, and on the flip side there is no one-size-fits-all when it comes to guidelines. Use the Code Inspector and start adding the worst transgressions to your custom checks.
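To give an idea of what a custom check looks like: roughly the skeleton below, which inherits from CL_CI_TEST_SCAN and flags forgotten BREAK-POINT statements. Treat it as a sketch - attribute names and the INFORM signature vary a bit between releases, so crib from abapOpenChecks for a working example.

* Sketch of a custom Code Inspector check that flags BREAK-POINT
* statements. Register it in SCI under Management of Tests.
CLASS zcl_ci_test_breakpoint DEFINITION PUBLIC
    INHERITING FROM cl_ci_test_scan
    CREATE PUBLIC.
  PUBLIC SECTION.
    METHODS constructor.
    METHODS run REDEFINITION.
ENDCLASS.

CLASS zcl_ci_test_breakpoint IMPLEMENTATION.
  METHOD constructor.
    super->constructor( ).
    description = 'Forgotten BREAK-POINT statements'.
    category    = 'CL_CI_CATEGORY_TOP'.
    position    = '001'.
  ENDMETHOD.

  METHOD run.
    DATA: ls_statement TYPE sstmnt,
          ls_token     TYPE stokesx.

    IF ref_scan IS INITIAL.
      CHECK get( ) = 'X'.    " parse the source if not done yet
    ENDIF.

    LOOP AT ref_scan->statements INTO ls_statement.
      READ TABLE ref_scan->tokens INTO ls_token INDEX ls_statement-from.
      IF sy-subrc = 0 AND ls_token-str = 'BREAK-POINT'.
        inform( p_sub_obj_type = c_type_include
                p_sub_obj_name = get_include( p_level = ls_statement-level )
                p_line         = ls_token-row
                p_kind         = c_error
                p_test         = myname
                p_code         = '0001' ).
      ENDIF.
    ENDLOOP.
  ENDMETHOD.
ENDCLASS.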
I've worked in many places. In some, developers have a good attitude and you can trust them to read and adhere to the standards. In other places, the only way to get them to adhere is to stand over them with a handgun (I think this is called "pair" or "extreme" programming). Seriously, if you've got externals who won't do what they're told - sack 'em. For internals, you just have to do whatever the socially acceptable form of beating is in your organisation. I've no time for people who won't adhere to a few simple rules - either they're not to be trusted, or they're incompetent.
I have a good attitude, so I always make sure to read the standards and adhere to them - no matter how daft I might find some of them. (e.g. one client insists that all variables must be declared at the top of the code unit - well, that'll work well when 7.4 turns up... with inline declarations!)
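For anyone who hasn't seen it yet, a quick illustration of the clash (get_customer_name and lv_id are just made up for the example):

* The "declare everything at the top" style the guideline mandates:
DATA lv_name TYPE string.
lv_name = get_customer_name( lv_id ).

* ABAP 7.4 inline declaration - the type is inferred at first use:
DATA(lv_name2) = get_customer_name( lv_id ).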
Peer code review is a good approach if you don't have enough resource for a separate QA section. It also means that you've got (supposed) experts doing the work. This does rely on attitude. I remember one project I was on where it was done diligently, as the developers realised that they could learn from each other not just the standards, but good techniques. It became a game to find violations. On another project, the developers found that the regulatory authorities were happy with a signature that a review had been carried out - so they just signed off every development without looking at it. I was in charge of all the developers across the project, did a few spot checks and discovered this - unfortunately, the higher management team didn't give a monkey's about actual quality, so they took no action.
I did a webinar for SCN 9 years ago on this subject, but the video appears to be no longer available. However, this blog is worth reading and you can see my comment there. https://blogs.sap.com/2015/01/27/code-review-success-factors/
An important factor in peer code review is that a junior reviews a senior and vice versa. That way everyone benefits, and it spreads the load, the programming knowledge, and good practice.
If your standards don't contain too much that's irrelevant, that will help. Concise standards are far more likely to be adhered to than a fifty-page document. Don't sweat the small stuff. I strongly recommend removing anything that says, e.g., that local variables must be prefixed with l_. Rather, follow Horst Keller's guidelines in the book "Official ABAP Programming Guidelines".
A few years ago, I got to define the standards for a client.
All code must be:
That was it. But that was with a small, close-knit team. For a larger team, you do have to enlarge on what those terms mean and give guidelines for achieving them.
For most clients, I have to sign that I've read the guidelines and standards and will adhere to them. For some, every now and then I have to sign that I've re-read them. For one, this is achieved and recorded through LSO. In their validated systems, there are ATC checks that must be passed. This works quite well, as it has spotted some programming issues that I'd missed. If you have some processes and restrictions that are an absolute must, put them into ATC and block the release of transports that don't pass the checks.
But to my mind, for improving code quality and ensuring adherence to the rules, nothing beats a peer code review (with spot checks to catch sign-offs done without actually checking).
Hi,
Let me start by saying that the last question you posted (and referenced here) is a remarkable example of how to use this platform, and it looks like you are keeping it up (maybe consider blogging as well, we could all benefit from it, I believe).
Now to the matter at hand: while I second Matthew Billingham's point about the importance of code review, I would also add that you need to put in place some "ego cancelling" mechanism (e.g. someone whose only job is code review, and where every remark is handled professionally, with reasoning behind it to justify it).
As for confirming your dev. guidelines, here are some practical ways to enforce it (some I saw at previous workplaces and some are a figment of my imagination but, I think, doable):
This list is, of course, incomplete and adds (sometimes even at a tangent) to points that were already made.
As for adhering to the guidelines, apart from what has already been said I would add: choose the best and brightest people you can to begin with, insist on good programmers, and demand the ones you've established a rapport with - you are paying for it, after all. Have a longer process for interviewing/allowing a contractor to work with you. This advice is a cliché by now for a reason :)
Reminds me of a code jam we did before TechEd a few years back...
It completely failed when we went to demo it, but... The answer you seek is gamification!
Badges will surely help here... You can get badges for completing a review, finding different types of errors, responding to review comments... Ah, the possibilities are endless!
Hook it into the branch merge (pull request in GitHub) functionality of abapGit, and we could have an ABAP development environment that might even be fun!
Unfortunately, it seems to be the sort of thing that only gets addressed in demo jams.
Good luck!
Chris
Hello everyone,
I see the challenges Bärbel mentioned in my organisation as well.
One of the key points is that you HAVE to have the backing of the program lead to "force" these rules and regulations, but this is where it falls short from time to time. People / consultants / contractors state: "this will have an impact on the timeline". Same discussion as with #unitTests. Yes, in the beginning... But often this is not accepted by management (with its short-term view).
Please, @Management: let those geeks do their work and care for quality, consistency, and code perfection! It will pay off over the lifetime of the system.
My advice to overcome the mentioned hurdles:
- Print some copies of your dev. rules and hand them out to every new developer.
- Talk to them and ask their opinion of those rules and whether they have suggestions (after they have read them).
- Use ATC & abapOpenChecks on a daily basis and talk about the results in dev. team meetings.
- Review your guidelines: don't bloat them into a rewrite of an ABAP book.
- Offer information / training, even to external consultants, on how to use ATC to simplify their everyday work.
- Try not to have so-called "developing consultants" (consultants who learned to type ABAP but have no clue about software development). A good key question is: what would you use an interface for?
Baerbel,
I am answering your question assuming your emphasis is more on how to make developers adhere to dev guidelines.
Then, the only way is:
1. Strictly enforce code review, then let the developers take corrective action on the review comments.
2. Only after Step 1 is done, approve the changes to be moved to QA.
K.Kiran.
Another approach would be to use a BAdI upon request/release of a transport, with some kind of checklist (ticking the boxes for 'unit test done' and 'read and conformed to the coding guidelines', as an example) preventing code from being "released into the wild"...
By storing and reporting on these indicators (through ad hoc peer review?) you might have something 'in writing' as a point of discussion or for reprimanding the developer...
Just to be clear: we haven't implemented anything like this at my company, but I have thought about something like this in the past. Inspiration was gathered from this blog: link
So even though this is an 'after the fact' approach, it could at least make general compliance with your coding guidelines and culture visible?
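For what it's worth, here is a rough, untested sketch of what I had in mind, using the classic BAdI CTS_REQUEST_CHECK. The method goes into the BAdI's implementing class (created via SE19); the other interface methods can stay empty, and the ZDEV_CHECKLIST table mentioned in the comment is purely hypothetical:

* Untested sketch: BAdI CTS_REQUEST_CHECK implementation that asks for
* the checklist on release and blocks the transport otherwise.
METHOD if_ex_cts_request_check~check_before_release.
  DATA lv_answer TYPE c LENGTH 1.

  CALL FUNCTION 'POPUP_TO_CONFIRM'
    EXPORTING
      titlebar      = 'Release checklist'
      text_question = 'Unit test done and coding guidelines followed?'
      text_button_1 = 'Yes'
      text_button_2 = 'No'
    IMPORTING
      answer        = lv_answer.

  IF lv_answer <> '1'.
    MESSAGE 'Please complete the checklist before releasing.'
      TYPE 'S' DISPLAY LIKE 'E'.
    RAISE cancel.    " stops the release of the transport
  ENDIF.

* Optionally persist the confirmation for later reporting, e.g.
* INSERT into a custom table like ZDEV_CHECKLIST (hypothetical),
* keyed on request and owner.
ENDMETHOD.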
First of all, a very interesting point and discussion - thanks for starting it. We've had the same questions over the last few years and were facing the same problem of how to enforce this and how to have enough (internal) capacity for code reviews, especially in larger projects. We've definitely not solved the issue, especially the process side; however, one thing that already helps to a certain extent is the following technical piece:
So far our experience with this is quite positive and we will definitely continue on this route.
Cheers,
Alex
We are doing the following to help with this issue:
1 - We have a formal standards document
2 - Developers are required to sign a statement saying they have read the formal standards document and will follow it
3 - All code is code reviewed. Included as part of that is the mandatory use of the Code Inspector with agreed-upon categories that we have documented. If the code fails the Code Inspector, that is marked up as such as well.
4 - We have a spreadsheet where we track the results of all code reviews, including the major categories of why a review failed, and we review that data at every meeting. The spreadsheet is viewable by all but only modifiable by the code reviewers.
5 - If all else fails, we have terminated a few people who were consistent repeat offenders or who we felt weren't cutting it.
Note - sometimes the reviewer makes mistakes too (never happens! - lol). If the reviewer makes a mistake, here is what I recommend:
1 - Admit you made the mistake to the person in an email. If applicable, let the entire team know.
2 - Correct the data in the scoring system you are using (in my case, updating the spreadsheet to show that the initial review passed) and let the person know it was corrected. It will make them feel a lot better if you admit your mistake to them and show that the data was corrected.
We have added things to our code review process over the years as follows:
1 - Use of the code inspector
2 - Also checking to make sure the functional spec was updated with technical details by the developer, if a functional spec was written for the change
3 - We have code reviews enforced by ChaRM. A piece of code cannot get from development to QA unless a code reviewer has reviewed it and marked it as approved.
4 - Track code review failure data with a group readable spreadsheet
Code Reviewer code of conduct:
1 - Do not make it personal. Everyone makes mistakes or forgets something once in a while. This is not a process for taking someone to the whipping shed, especially for something minor.
2 - If you bring an issue to the team where there is a pattern, there is little to no need to show who created or caused the issue. When I present issues that I want to point out, I purposely remove any and all identifying information, including the name of the program where I found the issue I'm using as an example. This goes along with point #1: not making it personal.
3 - Be professional. There is no need to slam someone and call them names; just point out what the issue is and, if necessary, which pages in the standards it violates. One reason: if you aren't nice about it and it turns out you were wrong, it really comes back to haunt you and doesn't make you look good.
4 - Be consistent and firm. If you ding something for one person, you need to ding the same thing for everyone. If you let one person get away with something while someone else got dinged for it repeatedly, it very quickly leads to an escalating credibility issue.
5 - Have a consistent escalation process for repeat offenders. If a person keeps doing the same thing and getting dinged for it, define an escalation process that first brings it to their attention and then, if that doesn't fix it, escalates further.
These are things I have learned over a career of doing software QA - not only for software development at my current company, but also for other companies, including a contracting firm.