Refactoring Legacy Code

Here are posts of some ideas I've had about refactoring legacy code:

I'll be cleaning this up real soon now...

--- I've created a new Yahoo group for the book I am writing, _Working Effectively with Legacy Code_. Please join if you are interested in the topic and wouldn't mind reading some drafts.

http://groups.yahoo.com/group/welc/

Michael Feathers www.objectmentor.com

--- A reengineering pattern language and handbook are here:

http://www.iam.unibe.ch/~famoos/patterns/

--- +What is more complex to understand: a project with 100 classes, each containing 5 methods of 5 lines, or a project with 10 classes, each containing 5 methods of 50 lines?+

Here's another perspective:

What if each method (no matter what its length in lines) were singular? You looked at the method's name and knew that it did one and only one simple thing. Imagine if every method in the system were this way. Would this make the system easy to understand?

I'd say yes, it would make it much easier to understand. Each method would be easy to understand because it is singular. Sequences of calls to singular methods are easy to understand because the sequences are comprised of singular methods. Given that the sequence is in a singular method, the sequence itself is singular, which again makes it easy to understand.

Now, can you write 50-line methods that do one and only one simple thing? I'd say maybe a few, but not for an entire non-trivial system.

Given, then, that you have a few 50-line methods and all the rest are very small, say ~5 lines, what is the average number of lines per method across the system? Somewhere just above 5 (for example, 195 five-line methods plus 5 fifty-line methods average out to about 6 lines per method).

Note the same reasoning applies at the class level: imagine that each class in the system groups and organizes a number of these singular methods, and each class is singular itself. That is, each class performs the activities related to one and only one thing. Now would the system be easy to understand? I'd say yes.
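
To make this concrete, here's a small hypothetical Java sketch (the class and method names are invented for illustration): every method does one and only one simple thing, and the public method is itself just a singular sequence of those steps.

 // Hypothetical illustration: each method is "singular", and the public
 // method is nothing more than a singular sequence of singular steps.
 public class GreetingFormatter {

     public String format(String rawName) {
         String trimmed = trimName(rawName);
         String capitalized = capitalize(trimmed);
         return buildGreeting(capitalized);
     }

     private String trimName(String rawName) {
         return rawName.trim();
     }

     private String capitalize(String name) {
         if (name.isEmpty()) {
             return name;
         }
         return Character.toUpperCase(name.charAt(0)) + name.substring(1);
     }

     private String buildGreeting(String name) {
         return "Hello, " + name + "!";
     }
 }

Each piece can be read and named in isolation, and the top-level method reads like a plain description of what it does.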

Again, can you write a class that has 50 methods in it and still does one and only one simple thing? I'd say maybe one or two but not for an entire (non-trivial) system.

--- Take a look at the latest draft of "Working Effectively with Legacy Code": http://groups.yahoo.com/group/refactoring/files/WELCNov272002.pdf

--- [ Something I have found useful in dealing with legacy software that I have not written and that is poorly designed is to first expand the notion of what a "unit" is. At first, it might be the whole system. I figure out what its current output is (given a specific input) and write a set of "unit" tests that reliably reproduce that output for a given set of inputs.

Then, I start to work on finding the subsystems, loosening some of the most egregious couplings in order to find somewhat smaller units. I develop tests for these smaller units and then continue on in trying to understand these smaller pieces of code by refactoring. By this process, I find still smaller units that I can at least sort of understand and test.

I eventually get to a point where I can demonstrate (either for myself or a sponsor) the recoverability or irrecoverability of this code base. If recoverable, my tests form a basis for continued refactoring and for functional enhancement. If not recoverable, I have learned enough about the code, and have harnessed it with tests, so that I can use it as a resource when developing a wholly new version. This is important because, as messy as it is, if the legacy system is running (i.e. producing value), there is still a lot to learn from its code base.
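
As a rough sketch of what such a whole-system "unit" test might look like (JUnit style; LegacyBillingSystem, its run() method, and the expected output are invented stand-ins for whatever the real legacy code exposes):

 import static org.junit.Assert.assertEquals;

 import org.junit.Test;

 // Characterization-style "unit" test where the "unit" is the whole system.
 public class LegacyBillingSystemTest {

     @Test
     public void producesSameOutputForKnownInput() {
         LegacyBillingSystem system = new LegacyBillingSystem();

         // The expected string is simply whatever the system produced when
         // this test was first written -- right or wrong, that's the behavior
         // we want to preserve while refactoring.
         String actual = system.run("fixtures/typical-invoice.txt");

         assertEquals("TOTAL: 1042.50", actual);
     }
 }

 // Stub standing in for the real legacy entry point, so the sketch compiles.
 class LegacyBillingSystem {
     String run(String inputFile) {
         return "TOTAL: 1042.50"; // the real system would compute this
     }
 }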

This is not test-driven development per se: it is something else. There's a new book on refactoring legacy systems (whose title escapes me), and then of course there is the paper by Michael Feathers that has been posted to this group (http://groups.yahoo.com/group/refactoring/files/WELCNov272002.pdf); I have found both to be helpful resources on this issue.

Michael Hamman

]

--- [ First, I'd like to introduce myself. My name is Bill Opdyke; many of you probably noticed my name listed among the contributors to the refactoring book. Prof. Ralph Johnson supervised my research at the University of Illinois which led to my PhD thesis (Refactoring Object-Oriented Frameworks) in 1992. ]

--- [ Keith:

Thanks for asking!

If you visit my home page (at the college):

  http://csc.noctrl.edu/f/opdyke/

You'll find a section on "professional/ research interests" / "leveraging and evolving legacy software systems", where I include links to several papers I've written on the topic:

  I was a guest editor on the topic "Evolving Communications Software:
  Techniques and Technologies" in the October 2001 issue of
  IEEE Communications.

  This paper, which I co-authored, focused on a range of discovery
  cost issues (and, to a lesser extent, interface issues).  The paper
  is titled "Understanding and Addressing the Essential Costs of
  Evolving Systems".

  This paper, "Rapid Development of Converged Services Using APIs",
  focuses on interface issues.  The paper has a heavy telecom/
  internet telephony focus; it was based on experiences with legacy
  telecom systems.

  This is a brief description of a talk I've presented in several
  forums, entitled "If Software is So Easy to Create, Why is it
  So Difficult to Evolve?".  It is targeted at both industry and
  academic audiences.  This (60-90 minute) talk includes both lecture
  material and an audience/group exercise.  I hope to take it on
  the road to more places in the future.

You'll find some references within those papers.

I'd be happy to talk with you (and others) further on this topic.

One last thing: I am co-organizing an OOPSLA 2002 workshop related to discovery costs; if anyone is interested I can pass along further details...

Bill Opdyke opdyke@acm.org AND opdyke@noctrl.edu

PS Thanks to all who have replied (to date) to my query/ this thread. ]

--- [
>> 1) Which metrics showed up the most obnoxious code?

> I have used a very simple metric that I find works quite well: I look for
> the largest method (i.e. the number of lines in the method).
>
> It's not 100% accurate, but given the simplicity of calculating the metric,
> it's a pretty good bang for the buck.

In my experience, the most complex method (most if, while, etc. statements) is a better candidate for refactoring. Of course, complex methods are usually large as well. However, some initializing routines (especially found in unit tests) are large but not complex. Examples are routines that generate a test input file or fill a temporary database with some SQL statements. These routines can theoretically be split, but that does not make the code any clearer.
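
A very rough sense of "most complex" can be had even without a real metrics tool. The sketch below is my own illustration (not from the post): it just counts branching keywords in a source file, and will happily count keywords inside comments and strings too, so treat it only as a cheap smell ranking.

 import java.io.IOException;
 import java.nio.file.Files;
 import java.nio.file.Path;
 import java.util.regex.Matcher;
 import java.util.regex.Pattern;

 // Crude "complexity" count: the number of branching keywords in a file.
 // Not real cyclomatic complexity -- just a cheap way to rank candidates.
 public class CrudeComplexityCount {

     private static final Pattern DECISIONS =
             Pattern.compile("\\b(if|while|for|case|catch)\\b");

     public static long count(Path sourceFile) throws IOException {
         String source = Files.readString(sourceFile);
         Matcher matcher = DECISIONS.matcher(source);
         long hits = 0;
         while (matcher.find()) {
             hits++;
         }
         return hits;
     }

     public static void main(String[] args) throws IOException {
         System.out.println(count(Path.of(args[0])));
     }
 }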

Best regards
Willem Bogaerts
Cardialysis ]

--- [
>>> 2) What was your general impression of this way of selecting
>>> refactoring targets?
>>
>> Not bad, except I wish I could correlate the number of lines in a method
>> with some metric indicating the "importance" of the method, e.g. how often
>> the method is run. There's no point in refactoring something that isn't run
>> (which makes me wish for a dead code detector).
>
> What is the point in refactoring a method that is run frequently, but never
> has to be touched by a programmer? If there were no point in doing that,
> wouldn't it be totally sufficient to spot a code smell as recently as you
> had to work on that code?
>
> Curious, Ilja

True, the frequency of a procedure does not add to my understanding (like shouting doesn't help if I don't understand something in a foreign language). However, if it is hardly called, I want to know if it is called at all. So a dead code detector is of great help there. I understand the code a lot better if all unused garbage is deleted.

The frequency is nice to know if you want to optimize your code for speed or memory usage. There are profilers that monitor your code in that respect.
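
A coverage tool or profiler is the right answer here, but even a hand-rolled counter (a minimal sketch of my own, not something from the post) can answer the narrower question "is this method ever called at all?":

 import java.util.Map;
 import java.util.concurrent.ConcurrentHashMap;

 // Minimal hand-rolled call counter: sprinkle CallCounter.hit("Foo.bar")
 // into suspect methods, run the system through a representative workload,
 // then dump() to see which methods were never reached at all.
 public class CallCounter {

     private static final Map<String, Long> COUNTS = new ConcurrentHashMap<>();

     public static void hit(String methodName) {
         COUNTS.merge(methodName, 1L, Long::sum);
     }

     public static void dump() {
         COUNTS.forEach((name, count) ->
                 System.out.println(name + " was called " + count + " times"));
     }
 }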

Regards,
Willem Bogaerts
Cardialysis ]

--- [
From: "Willem Bogaerts" <WBogaerts@Cardialysis.nl>
> Of course, complex methods are usually large as well.

Large methods have a high correlation to a lot of different kinds of stink. I'll grant that the correlation isn't 1.0.

I think metrics will always be assistive mechanisms, not absolute determinants. This, I think, implies a metric doesn't have to be perfectly accurate, just that it has a high correlation with "smell". ]

--- [
From: "Willem Bogaerts" <WBogaerts@Cardialysis.nl>
> True, the frequency of a procedure does not add to my understanding

OK I see your point. I am interested in getting the most benefit for my refactoring effort. Is there a better metric for discovering "important" methods?

I am assuming that, to maximize overall benefit, a good strategy is to refactor important methods first, rather than unimportant methods. One measure of "importance" is that the method is a central entity at run-time: it is called often, or it is called once but its results are crucial to other objects' functioning, or ... what? ]

--- [
From: Greg Vaughn <gvaughn@delphis.com>
> I think the real question here is should we refactor from a top down or
> bottom up approach. I've never had the opportunity to do a top down
> refactoring, where the boss says, "sure, take a few months to improve
> the code without offering any change to the end user".

Neither have I, but I've been given large chunks of code as my responsibility (in a non-XP shop). I fix the bugs/add a feature in the way you've described and then look for the worst smells. I refactor one smell, perhaps two, taking a couple of hours to a half day to do it, and release the code to QA. Over the course of a relatively short time, the code becomes quite stable, but only if you choose the right methods to clean up. ]

--- [
From: Shimon Rura <shimon@rura.org>
> This exceedingly strange idea occurred to me yesterday: if you
> are forced to keep your units of abstraction (classes, methods,
> etc.) small, you will be pressured to break down their complexity into
> other units of abstraction.

This is a good idea, but the caveats raised by other responders to your posts should be weighed carefully. You said "you will be pressured", not "you shall". In other words, you are trying to provide a motivation to do the right thing. As others have said, though, it does not /force/ anyone to write good code. But as a first step towards merciless refactoring and good OO, I'd say it goes in the right direction.

>What do you think: excessive and heavyhanded, or potentially useful?

All of this depends on your team and their individual personalities. If they are not responding to OO design rules as they should, then this might help them. If they have understood good OO rules and are rejecting them for some reason, then setting metric limits probably won't help at all.


> 1. No class may require more than 200 lines of code.
> 2. No method may require more than 40 lines of code.

I'd change this a little: the average number of statements per method is less than N.

Taking an average across all classes allows a few methods to be slightly bigger but makes the vast majority smaller to compensate.

Counting "statements" means excluding blank lines, comments, the method signature, brace-only lines, etc., i.e. counting just the code inside the method. The simplest thing is to use your favorite metrics tool and stick with it for consistency, so you can compare across applications and subsystems.
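
For illustration only, here is a deliberately crude sketch of that counting rule (a real metrics tool, such as the SourceMonitor mentioned below, does this properly): statements are approximated as non-blank, non-comment lines ending in a semicolon, and the method count is supplied by hand.

 import java.io.IOException;
 import java.nio.file.Files;
 import java.nio.file.Path;
 import java.util.List;

 // Crude approximation of "average statements per method" for one file.
 // A real tool handles signatures, braces, and multi-line statements better.
 public class AverageStatementsPerMethod {

     public static double average(Path sourceFile, int methodCount) throws IOException {
         List<String> lines = Files.readAllLines(sourceFile);
         long statements = lines.stream()
                 .map(String::trim)
                 .filter(line -> !line.isEmpty())
                 .filter(line -> !line.startsWith("//"))
                 .filter(line -> !line.startsWith("*") && !line.startsWith("/*"))
                 .filter(line -> line.endsWith(";"))
                 .count();
         return (double) statements / methodCount;
     }
 }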

As to N: choose a piece of representative code from your app. Refactor it mercilessly. Determine the resulting value of N (call it post-N). Then run a metrics tool over the whole app and refactor until the whole app is at post-N.

My experience has been (with a tool called SourceMonitor): for Java & C++ without exceptions, N is around 4 to 6. For Java & C++ with exceptions, N is a bit bigger at 7 or 8. For C++ COM/ATL N is around 10 - 15. (no EJB's in any of the Java.)

The "refactor mercilessly" step is a problem. If you haven't ever refactored mercilessly, then how do you know you're done? One trick I've used is to refactor until N is between 1 and 3. That is, I took the "merciless" part and went extreme on it.

I took a small utility app, originally around 2000 lines or so (IIRC) and refactored it mercilessly. I ran the code counter over it every so often. When I finished, N was around 3. It's not possible to get to a 1 line average, but it is possible to get close. You can still have 10 - 20 line methods but that just puts a lot of pressure to make /all/ the remaining methods 1 line each.

I did a lot of mechanical refactorings at first. Examples: the 'if' block and the 'else' block were extracted into methods, and the bodies of for and while loops were extracted as well. If I saw two or three lines grouped together with a comment heading:

 ...
 stmtx;
 //do a thing
 stmt1;
 stmt2;
 stmty;

the comment became the method name and the statements became the method body. Eventually I was extracting "any statements that belonged together in any way".
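
Here is what that comment-driven extraction looks like as a hypothetical Java before/after; the stmtx/stmt1/stmt2/stmty methods are placeholders standing in for whatever the original statements did.

 // After the mechanical extraction: the "//do a thing" comment became the
 // method name doAThing(), and the commented statements became its body.
 public class ExtractMethodExample {

     void original() {
         stmtx();
         doAThing();
         stmty();
     }

     private void doAThing() {
         stmt1();
         stmt2();
     }

     private void stmtx() { /* placeholder */ }
     private void stmt1() { /* placeholder */ }
     private void stmt2() { /* placeholder */ }
     private void stmty() { /* placeholder */ }
 }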

I ended up with many, very small, mostly private methods, as expected. With a little inspection, I found the methods were partitionable. I extracted those methods into new classes. The old methods ended up with 1 or 2 lines in them. The new classes had almost all public methods and only 1 or 2 lines in each method.

I began keeping track of the average number of methods per class (because the tool had it), but I just didn't consider it important. I'll have to try the experiment again some day.

I did the partitioning based on criteria like private methods that were used by only one public method, all methods that operated on one or two member variables, methods with the same set of parameters, methods that use the same OS API's, methods that "did the same sort of thing", etc. In general, any traits that two or more methods had in common, I tried to use as partitioning criteria.
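
As a small invented example of that partitioning step (the class and field names are mine, not from the original code): two methods that only ever touched the same pair of member variables get pulled out into a class of their own, and the old class delegates to it in one or two lines.

 // Hypothetical result of partitioning: the retry-related methods and the
 // two fields they operate on move into their own small class.
 class RetryPolicy {
     private final int maxRetries;
     private final long baseDelayMs;

     RetryPolicy(int maxRetries, long baseDelayMs) {
         this.maxRetries = maxRetries;
         this.baseDelayMs = baseDelayMs;
     }

     public boolean shouldRetry(int attempt) {
         return attempt < maxRetries;
     }

     public long delayFor(int attempt) {
         return baseDelayMs * (attempt + 1);
     }
 }

 // The original class now delegates instead of owning those fields directly.
 class Downloader {
     private final RetryPolicy retryPolicy = new RetryPolicy(3, 500);

     boolean retryAllowed(int attempt) {
         return retryPolicy.shouldRetry(attempt);
     }
 }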

Eventually I ended up with many, very small classes, with almost all public methods. An interesting observation is that most of the classes had become "atomic": they were not partitionable in any way, and it was easy to see that they were "atomic". Each class's purpose -- its responsibility -- was very simple, easy to state, and didn't involve words like "and" or "or". And I could verify that there weren't extraneous methods; any that existed pretty much stuck out like a sore thumb. It's harder to find missing methods, but still possible.

I took those classes that weren't atomic or singular and tried to split them as well. At that time, I was afraid to make classes with only one or two methods in them, so I didn't succeed in extracting out some classes as I should have, but I did manage to split a few more. If I had been more familiar with Design Patterns, I perhaps could have split up a few more classes.

As an aside, the hard part was (is!) naming the new classes. But an interesting side effect is that it forced me to think about what the class was doing or trying to do... which led me to juggle some methods around.

I did this to learn what /merciless/ meant; I don't generally do this in production code unless I happen to have extra time or there are good maintenance reasons to go that far.

I wish I could say that I had learned everything about good OO and refactoring from this exercise, but I didn't. There's more to OO than this, but the experiment got me a lot farther than I had been prior.

John ]
