I just spent the last week trying to work with a legacy system component that was implemented using the Spring framework. This component read data from a database into a Lucene index wrapped by Compass. At the time of implementation, the lead engineer used JPA to load database records into POJOs, which he then annotated so that they could be serialized via JAXB, which enabled Compass to read them in as Lucene Documents. Whew!
Because time was limited and the code was already in production, I decided to ignore my fundamental misgivings about frameworks and Java Acronyms, and make the minimal modifications to the existing source that would get it to take input from S3 instead of a database.
After a day of struggle, I had figured out what was going on, and was astounded by the amount of code required just to set up the relatively simple business logic. When I hit a 'schema not found' error trying to load the application.xml, I gave up, ripped out the business logic, and re-implemented the entire thing in a matter of hours. With a lot less code. I know that the original implementation of the Spring based code took a week or so to write.
The massive increase in efficiency is not because I'm a brilliant coder. I wish I was, but I've worked with brilliant coders and I'm not one of them. It's because the actual business logic was pretty minimal. Implementing and maintaining the Spring application, on the other hand, required a lot of code that could only be described as Ceremonial, as opposed to Essential business logic. I first read about Ceremonial vs Essential code here, the night after I had exorcised Spring from the logic. The timing couldn't have been more appropriate.
What is Ceremonial code? It is code that has nothing to do with implementing a business requirement. In Spring, I define Ceremonial code as:
- Configuration as code
- Dependency Injection
- (Pedantic use of) Interfaces
Configuration As Code
Separating configuration into a data file is inherently a good idea. You don't want to hardcode variables that you would then have to rebuild the application to change. I'm not sure how this basically sound idea warped into "hey, let's put EVERYTHING into configuration", but the biggest problem with this approach is that now part of the logic is in code and the other part is in a massive XML file. You need both to understand the control flow of the application, so you spend a lot of time toggling back and forth, saying "What class is being called? Oh, let me check the XML configuration. Oh, that's the class. Great. What was I doing?" Maybe some people see this kind of rapid mental stack management as interesting and novel brain training. I see it as wasting time, time that I could be spending either coding or testing a feature that someone is paying me to implement.
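The sound version of the idea is just externalized data plus plain code: values live in a properties file, and the control flow stays in one place. A minimal sketch in plain Java (the property names here are illustrative, not from the original application):

```java
import java.io.IOException;
import java.io.StringReader;
import java.util.Properties;

// Configuration stays in a data file; control flow stays in code.
public class Config {
    private final Properties props = new Properties();

    // Load key=value pairs from text; in production this would be a FileReader.
    public static Config fromString(String text) throws IOException {
        Config c = new Config();
        c.props.load(new StringReader(text));
        return c;
    }

    // Fail loudly on a missing key instead of returning null.
    public String get(String key) {
        String value = props.getProperty(key);
        if (value == null) {
            throw new IllegalStateException("Missing config key: " + key);
        }
        return value;
    }

    public static void main(String[] args) throws IOException {
        Config c = Config.fromString("s3.bucket=legacy-index\n");
        System.out.println(c.get("s3.bucket")); // prints "legacy-index"
    }
}
```

Changing the bucket name means editing a data file, not rebuilding the app, and there is no second file you have to read to follow the logic.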
Dependency Injection
This, too, starts off as a great idea. One of the big issues people (OK, I'm talking about myself, but using the word 'people' to get some more legitimacy) had with EJB 2.0 code was that it was really hard to test. You had to have the whole stack up and running, and validating the integrity of different components in the stack was so hard we just didn't do it.
Dependency Injection/Inversion of Control allows you to parameterize the components of an object, making that object really easy to test. Just expose components with getters and setters, and you can dummy them up and test that object in true isolation! Again, there is still nothing really flawed at this point.
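That testability win needs nothing from a framework, though. A small sketch in plain Java, with hypothetical names standing in for the app's real components:

```java
// Hypothetical component boundary: an indexer that depends on a document source.
interface DocumentSource {
    String fetch(String id);
}

class Indexer {
    private final DocumentSource source;

    Indexer(DocumentSource source) {
        this.source = source;
    }

    int indexedLength(String id) {
        return source.fetch(id).length();
    }
}

public class DummyTest {
    public static void main(String[] args) {
        // Dummy component: no database, no S3 -- the object under test is isolated.
        DocumentSource dummy = id -> "canned document";
        Indexer indexer = new Indexer(dummy);
        System.out.println(indexer.indexedLength("any-id")); // prints 15
    }
}
```

The Indexer never knows whether it is talking to S3, a database, or a canned string, which is the whole point of the exercise.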
The major flaw in Dependency Injection comes at implementation. Objects need all of their components in a known, initialized state, in order to function effectively. Dependency Injection as implemented in Spring is usually done in the configuration file. Objects that are created in the configuration file have all of their components set in their configuration.
It is very easy to miss setting a component in the configuration file. This means that the object will initialize in a bad state that becomes apparent when you try to use it. People use constructors because they can specify components as parameters to the constructor, which is an explicit way of saying "This component needs components X, Y, and Z to run".
Using a constructor provides a foolproof way to successfully initialize an object without having to test for initialization success. If the constructor returns, you're good. If not, you know that the object is not usable.
In order to be configurable via Spring, objects must (a) have a default (no-argument) public constructor and (b) expose all of their required components via setters. There is no way to enforce that setup has been done correctly, so the developer has to spend time looking at the getters and setters of the object to determine what components they need to supply at configuration time. When I compare that effort to the effort of looking at the constructor parameters, it feels very inefficient.
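The contrast described above is easy to see in plain Java, with no Spring involved (class names are illustrative):

```java
// Setter-style wiring: the object can exist in a half-initialized state.
class SetterWired {
    private Runnable component;            // forgotten in configuration?
    public SetterWired() {}                // required no-arg constructor
    public void setComponent(Runnable c) { this.component = c; }
    public void run() { component.run(); } // the failure surfaces here, at use time
}

// Constructor wiring: the required component is explicit and checked up front.
class CtorWired {
    private final Runnable component;
    public CtorWired(Runnable component) {
        if (component == null) {
            throw new IllegalArgumentException("component is required");
        }
        this.component = component;
    }
    public void run() { component.run(); }
}

public class WiringDemo {
    public static void main(String[] args) {
        SetterWired s = new SetterWired(); // constructs fine with nothing set...
        try {
            s.run();                       // ...but blows up only when first used
        } catch (NullPointerException e) {
            System.out.println("failed at use time");
        }
        CtorWired c = new CtorWired(() -> System.out.println("ran"));
        c.run();
    }
}
```

The setter-wired object fails far from the mistake; the constructor-wired one fails at the exact line where the component was omitted.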
Pedantic Use of Interfaces
The goal of the Java Interface is to (a) separate functionality from initialization and (b) provide a contract that a caller and callee can communicate across. This makes sense in the following two cases:
- You have a complex object and you only want to expose a part of it to a caller. For example, you have a parent class and you want to expose a callback interface to the child class.
- You have multiple implementations of the same functionality and you don't want the caller to care about which object they are calling.
What I see all over Java-Land, and especially in Spring, is interfaces being used because someone got pedantic about separating functionality from initialization. I fail to see the use of an interface when used to abstract the implementation of all of the methods of a single class. You're writing two structures when one could do the job just fine. Actually, you end up writing three structures: the interface, the implementation, and a factory object, which is more ceremonial code. Even if you need the interface, you could still have the implementation object return an instance of itself cast to the interface via a static initialization method:
```java
// jets3t and log4j imports for the original example
import org.apache.log4j.Logger;
import org.jets3t.service.S3Service;
import org.jets3t.service.S3ServiceException;
import org.jets3t.service.impl.rest.httpclient.RestS3Service;
import org.jets3t.service.security.AWSCredentials;

public class S3AccessorImpl implements S3Accessor {

    private static final int DEFAULT_SET_SIZE = 1000;

    private S3Service service;
    private Logger logger;

    // Static factory method: callers get the interface, not the implementation.
    public static S3Accessor getInstance(AWSCredentials creds) throws S3ServiceException {
        return new S3AccessorImpl(creds);
    }

    // Protected constructor keeps direct instantiation out of client code.
    protected S3AccessorImpl(AWSCredentials creds) throws S3ServiceException {
        logger = Logger.getLogger(this.getClass());
        service = new RestS3Service(creds);
    }

    ...
}
```
In spite of my comments above, I am a fan of using interfaces as the boundaries between components, because it facilitates easier unit testing. But I'm not entirely sold on abstracting the creation of an object to a Factory that returns the interface that object implements -- not when the above method (a) hides creation from the caller and (b) doesn't require an extra class with a single 'createFoo' method.
Also, I don't understand always writing interfaces first, then implementation classes second. I tend to implement classes until I have a real need for an interface, i.e. during unit testing when I am going to substitute a 'dummy' component in place of a real one.
Conclusion
My recent experience with Spring has reminded me of the existence of 'Framework Debt'. Framework Debt is the Technical Debt required to implement a solution with a given Framework. In short, it is determined by the ratio of time spent writing and maintaining ceremonial code to time spent writing and maintaining essential business code. The problem with most frameworks, Spring included, is that they do not distinguish between ceremonial and essential code, because to them, it's _all_ essential code: to work in that particular framework, the ceremonial code is absolutely required. Having to maintain and understand a bunch of logic that has nothing to do with application functionality seems inherently wrong to me.
I actually do like some frameworks I've run into. Rails is great because of its 'convention over configuration', though convention is another kind of technical debt. Fortunately that debt is pretty low in Rails, and as a result applications can be rapidly developed in Rails without losing maintainability. But even Rails feels too heavy for me at times. I do write apps that don't need the overhead of MVC. For these apps, Sinatra allows me to quickly get path routing out of the way and concentrate on the underlying code.
Great post, Arun, thanks for sharing!
Nice post... some comments:
1. Most objects that you put in the Spring application context are usually singleton services... so IMO not being able to explicitly call the constructor isn't a huge deal.
2. Spring JavaConfig is a better way to do DI wiring... instead of XML you essentially have Java classes that look like factories. It's typesafe, refactor-friendly and familiar.
3. Overall, if it only took you a few hours to rewrite the app in plain java, then Spring is definitely overkill :-) There is overhead to using DI, but it usually pays for itself as the project grows...
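For what it's worth, the JavaConfig wiring mentioned in point 2 might look roughly like this -- a sketch assuming Spring is on the classpath, with hypothetical bean classes standing in for the app's real components:

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

// Hypothetical collaborators, not classes from the original application.
class S3Reader { }
class IndexWriterService {
    IndexWriterService(S3Reader reader) { }
}

// The wiring that would otherwise live in application.xml.
@Configuration
public class AppConfig {

    @Bean
    public S3Reader s3Reader() {
        return new S3Reader();
    }

    @Bean
    public IndexWriterService indexWriterService() {
        // Dependencies are plain, typesafe method calls --
        // "go to definition" works, and the compiler catches a missing bean.
        return new IndexWriterService(s3Reader());
    }
}
```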
Thanks for reading! My responses to your numbered points below:
(1) Actually, the objects I was dealing with in the Spring application context were not singletons; they were components that were instantiated multiple times. This made following the logic hard without the debugger, because I had to keep switching back to application.xml to figure out where the actual code lived.
I can see your point about singletons -- sort of. Without explicit ctors, DI exposes the developer to a potentially partially initialized object, which means that the developer has to know how the object is initialized, which invalidates the whole principle of using an interface, right?
(2) Spring javaconfig sounds a lot easier to follow, especially for runtime components like I was dealing with.
(3) I completely get the intentions of DI, there is a lot of boilerplate functionality you want your services to have, and it makes sense to configure the wiring when a sizable percentage of your code is for configuring/initializing the objects that you will use.
However I think there is a tendency for developers to do things because they are possible, and all frameworks are potential 'golden hammers' with potentially bad consequences if they are used without thinking through the problem domain. I've seen too many instances where an otherwise smart developer makes up front framework choices about the solution to an incompletely understood problem domain, and the framework ends up being a burden instead of a productivity lever.
Hi Arun,
I totally agree with the golden hammer issue. Though, I wouldn't mind if my problem was a golden nail :-). I think you can extend that issue beyond frameworks to actual languages and computing paradigms (cloud, distributed computing, MapReduce, etc.).
As to my point about singletons in the app context... I meant to say that if the component is instantiated multiple times, maybe it shouldn't be a DI managed bean. There are only a few cases where it would make sense.
I just reread a part of the post... in fact you can use constructor based injection in Spring and do away with the setters. As you said, this is better because you either get an exception or you have a properly instantiated object.
It's nice to know that you can do ctor based injection in Spring. Combined with your points about programmatic configuration and the proper use of DI, that takes away all of my major gripes :) Thanks!
I'm left wondering why the original developer wrote the code this way, because these weaknesses are evident upon a fairly cursory inspection.
I completely agree with your point about Golden Hammer being applicable to languages and computing paradigms. I was in a situation last month where MapReduce was being rammed down my throat as the solution to a problem that had a real time delivery requirement... despite the fact that MapReduce is by definition a two step batch process. And I've heard more than once that 'Ruby is better than Java', or vice versa.
All of these arguments manage to sidestep our responsibility as engineers to choose the right tool for the situation after analyzing the requirements of that situation. One size can _never_ fit every possible problem statement.