Oct 14 2011

David Lloyd and I are working on putting together some standards for dependency management. We are hoping this will help unify projects and let all of the different projects use the same repositories.

Please join us for the discussion if you have ideas you want to share.

Here are the details:

10am PDT October 21
##java-modularity on irc.freenode.net

Nov 14 2008

Been working this week on Savant 2.0. My first thought when I started writing Savant 2.0 was to write a complete replacement for both Maven and Ant that used Groovy and allowed for both a plugin model as well as a simple build script approach. This was too much to bite off when you consider all of the other changes to the dependency management part of Savant we made for 2.0.

For JCatapult we created a set of Ant build scripts that could be plugged into any build file and reused. It looks like this:
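In rough form, a project's build.xml just pulls in the shared scripts (the property and file names here are only illustrative):

  <project name="my-project" default="jar" basedir=".">
    <import file="${jcatapult.home}/ant/java.xml"/>
    <import file="${jcatapult.home}/ant/jar.xml"/>
    <import file="${jcatapult.home}/ant/junit.xml"/>
  </project>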

This model meant that the build.xml file had nothing but import statements in it. It made life much simpler when working with numerous projects that were virtually the same.

The catch was that I loved this model. I liked it so much that I started using these scripts for everything, including Java.net Commons, Savant and internal Inversoft projects. That meant all of those projects had a dependency on JCatapult, which for all intents and purposes is a webapp platform. That seemed strange.

I started thinking about moving these files over to Savant. At first I figured I would just migrate the entire set of Ant plugins from JCatapult and be done. However, once I moved them over, projects would need to pull in plugins from a variety of places. Some would come from Savant and others might come from JCatapult or elsewhere, because some of the JCatapult plugins were very specific to JCatapult projects. I could force developers to download each plugin they needed from various places, install them in a specific directory and then update their build files before they could build anything, but it all started to look very clunky.

Then I remembered (duh!) that Savant is a dependency management tool and it can find, version and download ANYTHING. Why not apply this pattern to Ant build files in the form of plugins?

So, I did just that!

It works great. I wrote a simple Java class that figures everything out and then generates an Ant build file. The full process looks like this:

  1. New script called svnt invokes a Savant class
  2. This new class parses the project.xml file, which defines the plugins the project will be using
  3. It downloads any missing plugins to the Savant local cache
  4. Since the plugins are JAR files, it explodes each JAR into ~/.savant/plugins, under a directory specific to that plugin
  5. This class then generates a temporary build.xml file that imports all of the plugins the project needs
  6. It outputs the location of this temporary build.xml file
  7. The svnt script executes Ant passing it the location of this build file (i.e. ant -f /tmp/tmpbuild.xml)
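For example, the generated build file from step 5 ends up being little more than a list of imports pointing at the exploded plugins (the paths and plugin names here are only illustrative):

  <project name="generated" basedir="/path/to/project">
    <import file="${user.home}/.savant/plugins/clean/build.xml"/>
    <import file="${user.home}/.savant/plugins/java/build.xml"/>
    <import file="${user.home}/.savant/plugins/jar/build.xml"/>
    <import file="${user.home}/.savant/plugins/junit/build.xml"/>
  </project>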

And that’s it. Nothing fancy, just a little sugar on top of Ant to provide downloading of plugins and build files automatically.

The last part of this exercise is to write a bunch of plugins. Since these are just Ant build scripts, and we’ve all created hundreds of those, it should be a simple matter of tweaking them ever so slightly to work when imported.

My new projects don’t have a build.xml file (and when one does exist, it is only imported to add targets or override targets from the plugins); they just contain the Savant project.xml file. This file now defines plugins and it looks like this:
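Something along these lines (the element names and values are approximate):

  <project name="my-project" group="example.com" version="1.0">
    <!-- Plugins are resolved, versioned and downloaded like any other dependency -->
    <plugin group="savant.inversoft.org" name="clean" version="2.0"/>
    <plugin group="savant.inversoft.org" name="java" version="2.0"/>
    <plugin group="savant.inversoft.org" name="junit" version="2.0"/>
  </project>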

That’s it. If anyone is interested in trying it out, I should have a release available in the next few days along with some documentation for the 5-10 plugins I will have ready. If you want to try to build all this by hand right now, you’ll need to install JCatapult, since Savant uses it until it can be self-building.

Aug 10 2006

Dan Moore posted about my Google Projects idea and apparently Google is already doing this. It’s called Google Code and is available at http://code.google.com.

Naturally I needed to see what they had done with it. First off they decided not to tag it as beta like they do so many other things they release. Here’s the logo without the beta:

Google Code Logo

Second, this product is really more like alpha, but has some good stuff. Here’s what I did:

  1. Logged into my gmail account
  2. Hit Google Code
  3. Tried to make a project named “Bluprints”
  4. Google Code told me this was a SourceForge project and sent an email to that project’s owner to ask if I could use the same name. Since I’m also the SourceForge owner, I allowed it (this time)
  5. Unfortunately after I agreed to release the name I had to re-enter the project info into Google Code. This sucked and they need to fix that issue.
  6. I re-entered everything and went to the project homepage.

Okay, now here’s what they got:

  • Project home page with nothing on it and not much control. You can add links to other project pages, blogs, etc.
  • Issue tracking
  • Subversion
  • And minimal admin for the content

This isn’t bad for a first release. I do like that you can tag everything, even bugs, for easy searching. This is really going to help the open source search that Dan has been talking about. I also love the simple design that shows you the information you need without any crap like ads or the like.

As far as I could tell, you can only use Subversion and issue tracking, nothing else. For the most part this is okay because you can easily link to a Google Group for mailing lists and discussion, and you could link to a Blogger account for blogging. No wiki or forums yet either.

The largest missing feature, and the one that makes this unusable, is that you cannot release project files unless they are hosted on other servers somewhere and you put a link to them in your project description. This is really a deal breaker for me. In fact, unless they step up to the plate and offer me a way to release files on Google servers using a simple API (so I can do it from Ant/Maven/Savant), it’s just another project hosting site and I probably wouldn’t invest the effort in moving my projects over. I’ll definitely send them this link and hopefully they can add these features. If they do, I can guarantee everyone that you’ll be downloading Savant, Bluprints, and Verge from Google in the future.

Jul 06 2006

My buddy Dave Thomas has an interesting blog entry concerning convention vs. configuration. I’ve given it a considerable amount of thought today because it is something that hits you square in the face when switching between frameworks and languages (e.g. WebWork to Rails and back each day).

Both have upsides and downsides, and nothing really brings one out as a front-runner. Dave’s point about ramp-up time is completely accurate. But those configuration nightmares have the same issue, right? No one is really going to memorize what a WebWork validator XML file looks like. Who wants to do that? But they also suffer from double work: the work of coding and the work of configuring, which makes them difficult to compare with convention-based programming, where configuration is removed.

The major problem I see with convention-based programming is always constraints. Of course Rails and Ruby allow you a lot of freedom to modify classes and intercept method calls and such, which alleviates some of the constraints, but constraints remain. For example, in Rails it is difficult to externalize an action chain and reuse actions for multiple events. Something like this:
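  one -> two -> three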

If each of the steps is an atomic unit of work that simply states its outcome as success or failure, this makes sense. You can reuse three because it doesn’t know what it chains to. The chaining is externalized into some XML configuration somewhere. You can pull this off in Rails using YAML and some hacking around inside your controllers, but it can get ugly quick. In something like WebWork, this can be handled entirely in the XML configuration without the code changing at all.
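In WebWork terms, that externalized chain is nothing more than result declarations in the XML configuration (the action and class names here are made up):

  <action name="one" class="com.example.StepOne">
    <result name="success" type="chain">two</result>
  </action>
  <action name="two" class="com.example.StepTwo">
    <result name="success" type="chain">three</result>
  </action>
  <action name="three" class="com.example.StepThree">
    <result name="success">done.jsp</result>
  </action>

Re-wire the chain in the XML and none of the action classes change.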

Dave’s second point about maintenance seems to me to be an issue with language rather than framework. Creating crazy method-missing handlers and classes that are defined in 15 files are features of the language, and this is where I see the cost of maintenance being incurred. Rails itself doesn’t really let you tweak too much. It seems to me that Ruby lets you tweak just about anything. Of course, not being an expert in either, I won’t speak definitively about that.

Dave’s last point about Perl I’ve kinda covered already. This happens with both configuration and convention. However, with IntelliJ and configuration, you get some pretty good help and code completion that can really make the re-learning much faster. You still have to recall how to code on the configuration side as well as the convention side, and nothing can really make that less tedious and error-prone except documentation and good exception messages.

I’m not certain that I fully grok meta-frameworks per se. But I will say this: I have developed systems that are both configuration and convention, and they always seem to make me happy. To this day I’m still floored at how few frameworks do this, and even more floored at how few people have used mine even when they realize it supports both styles. The JavaLobby link on Dave’s blog entry covers this a little bit, but I think folks still lack an understanding of configurable/overridable conventions.

Take, for example, the Savant 2.0 alpha I released. I love this system for its use of convention and configuration. If you want to get up and running fast, you can run Savant to create a project layout for you (or just follow the convention for layouts and build one yourself) and that’s it. You can now build, clean, jar, deploy, whatever. The standard layout (the convention) is like this:
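The gist is that main source, test source and web content each have a fixed home, roughly along these lines (directory names here are representative, not gospel):

  project/
    project.xml
    src/java/main/
    src/java/test/
    src/web/main/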

It’s simple, straightforward and works with absolutely NO configuration. This is where it gets good though (and where I personally feel Savant excels past Maven and Ant). Let’s say you can’t use the convention or need to tweak it. Let’s say you have this project layout:
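For example, three source trees that each need to produce their own JAR (a made-up but common sort of mess):

  project/
    src/java/          (core classes)
    gui/src/java/      (Swing front end)
    tools/src/java/    (command-line tools)
    test/unit/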

Now the Maven folks will say, “that’s three projects with separate JARs” and perhaps it is, but you can’t change it right now (or maybe ever) and you would really like a nice build system to help you out. Well, Savant to the rescue:
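You just describe the layout yourself in the project definition, something like this (the element names here illustrate the idea, not the exact schema):

  <project name="legacy" group="example.com" version="1.0">
    <source dir="src/java" jar="legacy-core.jar"/>
    <source dir="gui/src/java" jar="legacy-gui.jar" jdk="1.4"/>
    <source dir="tools/src/java" jar="legacy-tools.jar"/>
    <test dir="test/unit"/>
  </project>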

This configuration overrides the default project structure and allows you to define as many source directories, build directories, test directories and JAR files as necessary. It even lets you define which JDK to use to compile each one if you want.

The moral of my story is that I think frameworks should always work based on convention right out of the box with as little configuration as humanly possible. Then if you want to tweak, slap down some configuration and go to town! Perhaps this is the meta-framework that Dave is talking about or perhaps not. Either way, this to me seems like a good solution that reduces the overhead as much as possible while still allowing the flexibility to tackle tough situations and problems.

Jan 16 2006

I just finished a large refactoring of the Savant dialect code in order to add support for dialect dependencies. Since it is always ideal to break down pieces of functionality into logical units and then introduce well defined dependencies, I went ahead and did that with Savant dialects.

I was thinking of using traditional getter/setter or constructor DI for this, but the more I thought about it, the less it seemed to fit. For those types of systems to work, you need a rather complex configuration and usually some ID mechanism to identify dependencies. You can also use type-based DI that does all of the work based on the types of parameters passed to constructors and JavaBean properties, but dialects shouldn’t be required to declare dependencies on a specific class, since they might not have that class in their classpath. Likewise, a large configuration file seemed messy and would really clutter things up (although it might be worth a shot in the future).

Instead, I added a method to the Dialect interface that takes a Map containing the dependencies. This seems like the best and most practical approach for now. Now I just have to tackle the versioning problem and then get back to writing dialects. Hopefully, with all this refactoring and dialect dependency support in place, new dialects will be much easier to unit test.
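In sketch form, the change amounts to something like this (the method name and the map’s contents are approximate):

  import java.util.Map;

  public interface Dialect {
    // ... the existing dialect methods ...

    /**
     * Hands this dialect its dependencies, keyed by name, so it never has to
     * reference another dialect's classes directly or dig through configuration.
     */
    void setDependencies(Map dependencies);
  }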