Simple Grails Plugin Repository Mirror Script

We needed a local offline repository for Grails plugins. I thought this would be a good chance to contribute a Grails application to the community. It turns out there already is one, called plugrepo, that you can just download and use.

The plugrepo Grails application is very nice looking, and if I'd known about it I wouldn't have spent the effort building my own.

What started all this was a command-line Groovy script I had hooked to a cron job on our Ubuntu Linux Apache server to create a mirror. I'll go ahead and post it here in case someone would rather have a flat, Apache-served archive... otherwise take a look at plugrepo.

To use the archive, set up your Grails app with this additional configuration file:

// grails-app/conf/BuildConfig.groovy
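The original post does not preserve the file's contents. A minimal sketch, assuming the Grails 1.1-era `grails.plugin.repos.*` settings and the same hostname the mirror script below uses (the repository name "internal" is made up for illustration):

```groovy
// Point plugin discovery at the local mirror instead of the default
// Grails plugin portal. The URL matches the `uri` variable in the
// mirror script below.
grails.plugin.repos.discovery.internal = "http://myhostname.test.com/grails/"
```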

The following Groovy script is meant to be run from the Ubuntu command line; it populates the /var/www/grails directory as if it were a Grails plugin repository.

#!/usr/bin/env groovy
import groovy.xml.MarkupBuilder
/*
 * Shawn Hartsock
 *
 * This is a quick and dirty Groovy CLI script for creating Grails plugin
 * repository mirrors.
 */

def uri = "http://myhostname.test.com/grails"
def destinationRoot = "/var/www/grails"
def version = "grails-1.1.1"
def separator = System.getProperty("file.separator")
def latestOnly = true

if(!args) {
    println """
this script is designed to take plugins-list.xml files (as arguments) from
a ~${separator}grails${separator}1.1.1 (or other version) folder and download each
plugin to the $destinationRoot working directory.

It will then output a $destinationRoot${separator}.plugin-meta${separator}plugins-list.xml
file modified to reflect the new locations of the files.

Pass -a to mirror all releases of each plugin instead of only the latest.
"""
    return
}

try {
    def destination = destinationRoot + separator + version
    new File(destination).mkdir()
    def writer = new StringWriter()
    def xml = new MarkupBuilder(writer)
    List files = []
    args.each { arg ->
        if(arg == "-a") {
            latestOnly = false
        } else if(new File(arg).exists()) {
            files << arg
        }
    }
    xml.plugins(revision:2) {
        files.each { file ->
            process(file, destination, separator, uri + "/" + version, xml, latestOnly)
        }
    }
    new File(destinationRoot + separator + ".plugin-meta").mkdir()
    def outFile = destinationRoot + separator + ".plugin-meta" + separator + "plugins-list.xml"
    def out = new File( outFile )
    if(out.exists()) {
        if(out.delete()) {
            out = new File( outFile )
        }
    }
    out << writer.toString()
} catch(Exception ex) {
    println "Error: " + ex.message
}

void process(String fileName, String destination, String separator, String uri, xml, latestOnly) throws Exception {
    def text = new File(fileName).text // may throw file access exceptions
    def plugins
    try {
        plugins = new XmlParser().parseText(text)
        assert plugins?.plugin.size() > 0
        assert plugins.plugin[0].release != null
        assert plugins.plugin[0].release.file != null
    } catch(java.lang.AssertionError ae) {
        throw new Exception("File $fileName does not contain plugin definitions: " + ae.message)
    } catch(Exception ex) {
        throw new Exception("File $fileName could not be parsed as plugin XML: " + ex.message)
    }
    println "${plugins.plugin.size()} plugins found in $fileName "
    println "\t"
    plugins.plugin.each { p ->
        xml.plugin( name:p.'@name', 'latest-release':p.'@latest-release') {
            p.release.each { r ->
                def get = true
                if(latestOnly) get = (r.'@version' == p.'@latest-release')
                if(get) {
                    def f = download(r.file.text(), destination, separator)
                    if( f ) {
                        def url = uri + "/" + f
                        println "\tregistering:\n\t " + url
                        release(type:r.'@type', tag:r.'@tag', version:r.'@version') {
                            file(url)
                        }
                    }
                }
            }
        }
    }
}

def download(String urlString, String dest, String separator) {
    // note: this is a web URL so you tokenize on "/" not file.separator
    def fileName = urlString.tokenize("/")[-1] // last name on URL
    def destFileName = dest + separator + fileName
    if( new File(destFileName).exists() ) {
        println "\t $destFileName already present in the repository"
        return fileName
    }
    print "\tdownloading $fileName to $dest directory\t"
    def file = new FileOutputStream( destFileName )
    def out = new BufferedOutputStream(file)
    out << new URL(urlString).openStream()
    out.close()
    println "."
    return fileName
}

Apache Ivy: Componentization? What's hot and what's not?

I'm working with Apache Ivy over the next few weeks. The problems I'm trying to solve are around the testability of a J2EE application and its functional decomposition. I chose Ivy after an evaluation period because of its simplicity and how easily it can be ported into existing Ant build systems.

In the case of the Grails applications I'm working with, this problem domain is all rather straightforward. The Grails applications can be functionally decomposed easily along plugins and modules. They were all developed with a Test-Driven Development (TDD) mentality and so have a large suite of automated tests around the application. NOTE: I have not yet picked a code-coverage tool for this environment, so don't ask about code coverage for now, okay... suggestions are welcome.

The problem still lies with the J2EE application. I have several components I can identify that have not changed in years since the original system designers bequeathed the code to their heirs. The original system had no automated testing but had ample "test scripts" which were human driven. In some respects I'm shocked at this approach to testing but I can't say it surprises me.

I've been in work environments that literally employed armies of testers. (No really, it was literally the Army... literally armies of testers.) And while there is a place for this if you can afford it ... doesn't it make sense to focus those human-level testers on testing human-level problems? I'm not talking about getting rid of those armies of testers just focusing them on the most interesting problems, saving their collective power for the big issues.

So, here's my hypothesis about how you can decompose an application that has long been in development and has no automated test harness, so that it can be better managed. It's a work in progress, so please help me knock off the rough edges... or, if you think I'm daft, let me know.

I'm operating under the theory that the human testing in fact exercises all relevant existing code as it is compiled and bundled up as a part of the whole system. Any components we identify were in that tested whole. Any components we create will be tested under the unified whole. Therefore any movement of components creates no net change in the whole. This initially appears pointless but positions us to begin creating automated unit and integration tests around the identified components. The outcome creates no user-visible results initially but is incredibly important since it makes adding features much more certain.

The end goal of the introduction of Ivy is to identify stable framework components and put those components under tests that mimic the current human-based test scripts. The end result will be the ability to identify the volatile system components and isolate them for focused testing and design work. This is desirable because you isolate the system's accidental complexity and help keep it away from its intrinsic complexity. You should in the process be able to identify layers of abstraction in the system.

NOTE: An interesting side effect is that classes designed for use with RMI may not change frequently right now, but we have observed that they "break" backwards compatibility at seemingly random intervals. Since our system is distributed, this poses a problem. Decomposing these RMI interfaces and classes into their own Jar, compiled separately and only when they change, means changes that break compatibility between nodes will become very apparent, since it will be harder to make such a change accidentally.

Working theory: Componentization of large existing systems

I'm thinking that (in a large existing system) you want to decouple from the grand unified build only those packages of classes that change little between revisions. You should be able to identify these "stable components" by creating a "heat map" of the repository, watching the rate of change in the change-control system. The more frequent the changes, the "hotter" the class. The "hotter" the class, the closer it should stay to the other frequently changing components... ostensibly going under a new round of full tests with them. I would select only the coldest classes to be moved into components for control by Ivy.
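That heat map can be approximated from version-control history. A minimal sketch in Groovy, assuming a change log that lists one touched file per line (the kind of listing `git log --name-only --pretty=format:` or `svn log -v` can produce); the file names and threshold below are made up for illustration:

```groovy
// Build a "heat map": count how often each source file appears in a
// change log that lists one touched file per line.
def heatMap(String log) {
    def counts = [:]
    log.readLines()
       .findAll { it.trim().endsWith(".java") }  // only source files
       .each { def f = it.trim(); counts[f] = (counts[f] ?: 0) + 1 }
    counts
}

// "Cold" files changed fewer than `threshold` times; these are the
// candidates for extraction into separate Ivy-managed Jars.
def coldFiles(Map counts, int threshold) {
    counts.findAll { file, n -> n < threshold }.keySet().sort()
}

// Hypothetical change log: Order.java is "hot", the RMI class is "cold".
def log = """src/Order.java
src/Order.java
src/Order.java
src/rmi/Billing.java
"""
def counts = heatMap(log)
println counts                // per-file change counts
println coldFiles(counts, 2)  // only src/rmi/Billing.java falls below the threshold
```

The point of the sketch is only the ranking: everything above the threshold stays in the unified build, and everything below it becomes a candidate component.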

I will take these carefully selected classes and move them to a separate module to be built and packaged as a single Jar file. These will be placed in the Enterprise Ivy repository for management by Ivy. At build time the project will download these Jar files, just like other Ivy dependencies, from the Enterprise's Ivy repository.
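For illustration, the project might then declare one of those extracted Jars in its ivy.xml like any other dependency (the organisation and module names here are hypothetical):

```xml
<ivy-module version="2.0">
    <info organisation="com.example" module="myapp"/>
    <dependencies>
        <!-- a "cold" component extracted from the unified build -->
        <dependency org="com.example" name="rmi-interfaces" rev="1.0"/>
    </dependencies>
</ivy-module>
```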

When the unified whole goes under test, part of that whole will be the independent Ivy-managed Jars. These Jars can be instrumented during our human-driven tests to see how they are exercised by those armies of humans. With that documentation I can then devise unit and integration tests to re-enact those human-level tests on the isolated Jars. That means the next round of the application's life cycle will have a set of automated tests that document how the system operates.

Once we know what the colder classes do we can begin to formulate a framework based on them. And that begins the first steps to identifying and targeting changes to the system to add to what it can do... or designing a replacement... or designing a new feature. Each time behavior changes in the future it will be more explainable and thus more controllable.

And you start getting there by identifying what's hot and what's not.

Have I gone wrong? Commentary?