Hibernate3 is not EJB3

My first and most painful lesson in my year with J5EE was about Hibernate, Hibernate3, annotations, and EJB3. I had been led to believe by an evil leprechaun that EJB3 was just Hibernate. I learned by actual experience in creating applications that this is simply not true. In fact, EJB3 is a standard that Hibernate3 implements.

Why is that an important distinction?

Because EJB3 features are a proper subset of Hibernate3 features, since Hibernate "embraces and extends" the EJB3 standard. It also means that if you do intend to use those cute little "at" signs in a "portable" way, you'll have to figure out which EJB3 annotation to use to solve the Hibernate trace you just saw.

EJB3 doesn't require XML sit-ups, and the EJB3 implementation in JBoss that sits on top of the Hibernate3 layer hides those gut-busting crunches from your beady little eyes. So using the EJB3 persistence feature is far less painful than using the Hibernate 2 XML workout routine. That would typically mean that you'd want to use the EJB3 model whenever possible.
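To make the "portable annotations" point concrete, here is a minimal sketch of what staying inside the standard looks like. The `Customer` class and its fields are made up for illustration; the point is that it imports only `javax.persistence` annotations, never anything from `org.hibernate.annotations`, so any EJB3-compliant provider should be able to map it:

```java
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;

// A "portable" EJB3 entity: only standard javax.persistence annotations,
// no Hibernate-specific imports, and no XML mapping document required.
@Entity
public class Customer {
    @Id
    @GeneratedValue
    private Long id;

    private String name;  // mapped to a column by default

    public Long getId() { return id; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}
```

The moment you reach for a Hibernate-only annotation, you've traded portability for a vendor feature, which is exactly the distinction the standard-versus-implementation split is about.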

The problem is that annotations for persistence and XML for persistence can get into fisticuffs. If you set up a Hibernate SessionFactory (even if it's a version 3 session factory) you may use XML docs _or_ annotations... theory may say otherwise, but when I sat down to mix my annotated classes with XML-configured classes, the programmatic changes to the configuration wouldn't stick.

Hibernate3 is not EJB3.


A year with Java 5 EE

I've spent the last year with Java 5 Enterprise Edition betas and I'm going to talk about my experiences with the Java environment as a relative newcomer to the Java landscape. My background is in C/C++ System V programming on Linux, Solaris, and Irix systems. It has been a hard year but one that I'm thankful for. I've learned a lot about how Service Oriented Architectures go together, why they work, and why they fail.


Politics in Software

The choice of which software technology does which job is actually a trick of politics. The truth is that any Turing-complete language can do the same job that any other Turing-complete language can do. This means that the choice of a particular technology or suite of technologies is not based on some formal rigor but on some politically motivated choice.

Developers want portable skills, so they have the security of a selection of employers. Businesses want easily replaceable employees for various reasons including keeping wages lower. The result is that the market will likely force a single programming language to the fore-front even if it doesn't deserve this placement on technical merit.

Similar forces work on operating systems and hardware too. The push for commodity parts forces us into architectures like the PC and its x86 instruction set. Today there are no real competitors to the PC architecture. There are no real competitors for the desktop metaphor either.

Even if I mention Linux on the desktop I'm not really mentioning anything all that radical. Even Linux bends to a Windows-esque operating metaphor. When GUI designers make GUIs for Linux, they go back to windows-keyboard-mouse (can I call it "wikemo" and not get snickered at?).

That mono-culture means that the window-keyboard-mouse paradigm is even further locked in. But so are other paradigms that are equally arbitrary and equally locked into the computing psyche. The questions that need to be asked as we start to see more and more internet enabled devices are:

* Do these paradigms work for us still?
* Is there a better way to represent program structure for these devices?
* Is the better way worth the effort of changing everyone's mind?


Space-time tradeoff

My half of a chat conversation:

"In computer science, a space-time or time-memory tradeoff is a situation where the memory use can be reduced at the cost of slower program execution, or vice versa, the computation time can be reduced at the cost of increased memory use."
From: http://en.wikipedia.org/wiki/Space-time_tradeoff

... so in a very general and theoretical sense... if there were enough RAM on your system, your OS should put everything in RAM to make things faster... not just because hard drives are slow... but because it is faster to look up the answer to 32 billion divided by 8 than it is to compute 32 billion divided by 8. The more time it takes to compute an answer, the smarter it is to save that answer so you can look it up later.
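The look-it-up-instead-of-recomputing idea is just memoization. Here's a minimal sketch (the `DivisionCache` class is made up for illustration) that spends memory on a `HashMap` so repeat queries become a lookup instead of a division:

```java
import java.util.HashMap;
import java.util.Map;

// Trading space for time: cache each quotient the first time we compute
// it, then answer repeat queries with a hash lookup instead of redoing
// the arithmetic.
class DivisionCache {
    private final Map<String, Long> cache = new HashMap<String, Long>();

    public long divide(long a, long b) {
        String key = a + "/" + b;
        Long cached = cache.get(key);
        if (cached != null) {
            return cached;          // memory spent, time saved
        }
        long result = a / b;        // the "slow" computation
        cache.put(key, result);
        return result;
    }

    // how many answers we are holding in RAM
    public int size() { return cache.size(); }
}
```

Division is too cheap to be worth caching in practice; substitute any expensive computation and the same shape applies.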

More RAM makes software many many times faster.

However, database design teaches the exact opposite: only save that which you can't compute, and save that information in the most terse, compact form possible. That is why database engines tend to do a lot of on-the-fly computation, caching, and indexing. The reason is that databases are concerned with the consistency and accuracy of data, and that is easier to maintain for terse, tight, inter-related groupings of data without any redundancy.
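The normalization instinct looks like this in code. The `OrderLine` class is a made-up example: it stores only the facts that can't be recomputed (unit price and quantity) and derives the total on the fly, so the total can never fall out of sync with the data it depends on:

```java
// Store only what can't be recomputed; derive the rest on demand.
class OrderLine {
    private final long unitPriceCents;
    private final int quantity;

    public OrderLine(long unitPriceCents, int quantity) {
        this.unitPriceCents = unitPriceCents;
        this.quantity = quantity;
    }

    // Computed, never stored: a single source of truth means no
    // redundant copy that could drift out of date.
    public long totalCents() {
        return unitPriceCents * quantity;
    }
}
```

Caching the total would be the space-time tradeoff from the previous post; refusing to store it is the database designer's answer.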


JAX-RPC is killin' me.

No collections?!? WTF! It seems that the JBossWS product put out by JBoss does not allow for collections over the wire as SOAP... yet my XJC tool will turn a DTO into POJOs using java.util collections. What gives!

We live in the 21st century; application developers should have dynamic data structures like double-ended queues, vectors, and hash maps to work with. If you can't do a container, then your serialization framework just isn't ready for prime time.

Damn it. Now I'm going to have to pore over all that autogenerated code and rewrite it to use plain arrays. I wonder how Hibernate will feel about this.
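The rewrite amounts to something like this sketch. `OrderDTO` and its fields are hypothetical stand-ins for the generated classes: the serialized property becomes a plain array, with an overloaded setter as a bridge for code that still builds a `List` internally:

```java
import java.util.List;

// Hypothetical DTO rewritten so the wire-visible property is a plain
// array, which a SOAP stack that can't serialize java.util collections
// can still handle.
class OrderDTO {
    private String[] itemIds = new String[0];

    public String[] getItemIds() { return itemIds; }
    public void setItemIds(String[] itemIds) { this.itemIds = itemIds; }

    // Convenience bridge: internal code keeps using collections and
    // converts at the DTO boundary via List.toArray.
    public void setItemIds(List<String> ids) {
        this.itemIds = ids.toArray(new String[ids.size()]);
    }
}
```

Keeping the conversion at the DTO boundary at least confines the array-only restriction to the wire layer instead of spreading it through the whole codebase.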


jBPM, EJB3 Persistence, Hibernate....

I think I'm going to go nuts. jBPM, EJB3 Persistence, and Hibernate... all playing nice-nice together? Is it possible? Apparently not.



I'm developing a project in jBPM, EJB3, and SOAP and I haven't checked my email in weeks. I haven't surfed for just as long. I'm in the Zone but so intently I've zoned out.

And it seems that this is a prevalent model for developers to follow. It seems that learning a new technology stack requires weeks of hyper-focus. I know I'm not alone in this.


Technology and Society

In The Awesome Power of Spare Cycles, Chris Anderson explains how Open Source, YouTube, and MySpace are all powered by spare cycles. In reality, all of society, from art, music, and literature all the way to science and politics, is built on spare cycles.

In your high school or college sociology class you might have learned that societies are built on the surplus food that a group of humans can create. In other words, you don't get tributes to Zeus until there is a surplus of food lying around that the peasants won't mind parting with. The arts, religion, politics, and kingdoms all come from the ready supply of extra food.

If you want more open source, then create an environment where more people can take the risk of creating open source projects and even potentially waste their time on them. Consider that most projects fail. Most projects do not become popular. There must be enough surplus developer time to support those risks so that the one lucky project that changes everything has the chance to get created and have a few people waste their time on it before it becomes a product.

My company has taken a risk on me. They believe that they can sink me and some capital investment into my projects and I will create a return on the order of millions of dollars. I'm going to have to go out and make that expectation come true... yet at the same time I need to have permission to fail a few times. If I can't afford to fail... I can't afford to take the big risk that will pay off.


Your Programmable World

Today your car's behaviors are controlled by on-board computers that run firmware: basically assembly-level software written down into the hardware in one fashion or another. You can get your car a software upgrade.

The car is complex and exhibits complex behavior by design. The more complex the behavior, the more valuable the ability to program it. My house, for example, has an IP address. I can program my home to keep different temperatures at different levels at different times of the year. The ability to program my house more directly would be valuable, but I can't do that yet.

When we start seeing more and more programmable things in our world, we will see the need for more and more programs. We will desire more and more interconnection of those things. We will need more software. Software stops being the domain of a server; it becomes embedded in our world.

Where would more intelligence make sense? Do I need a smart coffee maker? How would I send an upgrade to it if I had a smart coffee maker?

I want to start playing with these ideas. To do that I need something like a pico computer that I can program with existing Open Source software. A computer with wifi that I can stuff in the sole of a shoe and won't mind getting sweaty. Anybody selling one of those?


helper helpers

While trying to solve a problem in code, you go about finding the generic, abstract, or reusable parts and refactoring them out into units that can be adapted to a number of similar tasks using configuration or minimal programming. In a language like Java this is usually about finding parent classes and helper methods... or even aspects.

For example, you pull a private helper method out of a public method. The private helper takes three parameters while the public method takes only two. Now you can create a new public method that also takes two parameters and uses the private helper with a different third parameter. The benefit is that your second method only took one line of code and you saved yourself from cut-and-paste hell.
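A tiny sketch of that pattern (the `Greeter` class and its methods are invented for illustration): two public two-parameter methods, each a one-liner over the same private three-parameter helper:

```java
// Two public two-parameter methods delegating to one private
// three-parameter helper; each new public variant costs one line.
class Greeter {
    public String greet(String name, String language) {
        return format(name, language, "!");   // default punctuation
    }

    public String greetQuietly(String name, String language) {
        return format(name, language, ".");   // same helper, new behavior
    }

    private String format(String name, String language, String punct) {
        String hello = "es".equals(language) ? "Hola" : "Hello";
        return hello + ", " + name + punct;
    }
}
```

Each public method stays readable because the shared logic lives in exactly one place.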

When does this stop?

When it's appropriate to stop. At some point the helper helper helper pushes the code's function so far down that you can't tell what the public method is supposed to do... or you end up with hundreds of one parameter methods that call two parameter methods that call three parameter methods. At this point abstraction stops helping and starts hurting the cause. Too much abstraction simply becomes obfuscation.