Tuesday, January 25, 2011

About the Hudson debacle

There are some big discussions about forking Hudson. Oracle's latest response seems to have triggered this reply from Hudson's main developer Kohsuke, now hired by CloudBees.

Personally, I don't see all that much negativity in the Oracle message. Actually, it seems very well balanced.

One thing I agree with entirely: there is no such thing as a project "renaming". We are talking about a fork. Even if the Jenkins fork becomes the one with the biggest market share, it's still a fork.

This reminds me of the EGCS fork of GCC, which became so good that it actually became the next 'official' GCC release. There is still a chance this might happen with Hudson/Jenkins, so I don't see why Kohsuke seems so eager to burn all the bridges with Oracle.

I am also part of the "community", having used Hudson since before it was so fashionable, and I don't see why I should be so enraged about all this. I guess I am too cynical not to notice that there are two companies involved, CloudBees and Oracle, and that only one of them makes money almost exclusively from Hudson-based services. I think there's a natural conflict when for-profit companies make money from open-source software: they'll always want to keep some proprietary "added value".

What I did understand is that Oracle is wary of using non-Oracle infrastructure (GitHub, etc.), which seems to annoy some of the developers. But, other than that, I don't understand the need to fork the project.

Friday, January 21, 2011

Slow hardware means lost opportunities and developer frustration

At my first job, after a steady diet of desktop applications, I was asked to build a web application.

It was 2006 and JSP was the big game in town (ah, taglibs), Struts had just become a top-level Apache project, JSF was just getting started, and the Spring Framework, at some 1.x version, was the only sane-looking thing around.

Having worked with Java on the desktop, my options were to use Java on the server side too or to learn something new (I ruled out PHP quite early).

So it came down to a choice between something very flashy called OpenLaszlo and a servlet-based Java solution, with PostgreSQL as the database (I didn't like MySQL either).

OpenLaszlo was quite interesting because it compiled to Flash, so you could create quite beautiful pages. It would also have mapped nicely onto our application, as we required some rather custom stuff. Also, the site was to be used by management, so charts and other flashy, interactive content would have been welcome.

In the end I picked the Java servlet solution, using the Spring Framework, and a whole lot of it was custom-made.

The reason? The laptop I had back then could barely run OpenLaszlo locally!

It was some clunky Compaq with 1GB of RAM (I got 2GB at some point), so it could barely keep up with a normal XP/Eclipse/browser/email configuration, let alone run my local database and a local OpenLaszlo server.

Of course, that might also have been a blessing in disguise, because who knows how easy it would have been to actually implement everything with OpenLaszlo? But, the problem is, we'll never know.

Ever since, I have considered that developers need access to good machines. Software development is hard enough as it is; you don't need to fight your machine too.

Sure, some low-end computer might be used for testing purposes, but expecting the programmer to work on a slow machine just because that's what the users have (or because there is no budget) plants the seed of a lot of lost opportunities and developer frustration.

Thursday, January 20, 2011

The 'miserable programmer paradox' isn't about technology

I read a blog post today which states that there is a "miserable programmer paradox":

A good programmer will spend most of his time doing work that he hates, using tools and technologies that he also hates.

And the conclusion seems to be that it's all about the technologies that the programmer is using:

The bad technologies take big chunks of time and concentration. The good technologies take little time and concentration. The programmer has a fixed amount of time and concentration that he can give every day. He must give a bigger piece of the pie to the bad technologies, simply because they require more. In other words, he ends up spending most of his days working with tools and technologies that he hates. Therefore, the good programmer is made miserable.

This conclusion seems flawed because it assumes that programmers have no desires and preferences of their own.

The conclusion reminded me of this interview called Edsger Dijkstra -- Discipline in Thought.

In this video, at 11:48 we have a very interesting quote:

And the programmers weren't interested [in faultless programs] because they derived their intellectual excitement from the fact that they didn't quite know what they were doing. They felt that if you knew precisely what you were doing and didn't run risks, it was a boring job.

I think this is a much better explanation for the miserable "good" programmer paradox (as defined above).

The good programmer is miserable because he doesn't get to use the shiny tools and technologies, and because he is bored by the fact that there is nothing new.

But he is also good because he knows exactly what he is supposed to do.

Technologies and tools might play a part, but I think the humans in the equation are much more important to look at when searching for answers.

Tuesday, January 18, 2011

iPhone app which automatically rejects hidden or blocked callers

June 2nd, 2013 update: The Do Not Disturb feature from iOS 6 provides something similar. It doesn't allow users to filter by call type, but it does allow a "quiet" schedule, which is very useful.

noblocked is an iPhone 3.1.3 app which automatically rejects incoming calls from callers that are hidden (or shown as blocked by the iPhone).

All the other calls aren't affected.

This app is not a generic whitelist / blacklist caller app, although it could be easily extended to do that too.

I wrote the app for personal use, but I'm releasing its source code so that other people may learn from it -- I know it took me a while to get used to notions like private frameworks, since I had always used only the official, documented APIs.

Obviously, I'm not going to submit this app to the AppStore.

A precompiled binary is available in the downloads section, but I haven't actually tested it; it's just an unsigned build I've uploaded. Some feedback or testing would be nice.

Saturday, January 15, 2011

Go read org.openide.modules.PatchedPublic for some binary backwards compatibility magic

The @PatchedPublic annotation was a nice surprise. Let's see:


package org.openide.modules;

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

/**
 * Marks a method or constructor as being intended to be public.
 * Even though it is private in source code, when the class is loaded
 * through the NetBeans module system the access modifier will be set
 * to public instead. Useful for retaining binary compatibility while
 * forcing recompilations to upgrade and keeping Javadoc clean.
 */
@Retention(RetentionPolicy.CLASS)
@Target({ElementType.METHOD, ElementType.CONSTRUCTOR})
public @interface PatchedPublic {}


This is a really brilliant way to force your codebase to migrate to the latest APIs while maintaining binary backwards compatibility.

The idea is that you have an old method that you want to deprecate and eventually remove:

@Deprecated
public void doSomething() {
    // existing implementation
}

The nice thing about @Deprecated is that it's a flag that signals to API users that they really shouldn't use the method, but it doesn't do much else: new code might still be written against the method and your existing codebase might still use it.

So you replace it with:

@PatchedPublic
private void doSomething() {
    // same implementation, now hidden at the source level
}


Since the method is now private, all your local code that uses the old public method will fail at compile time and will be forced to migrate to the proper APIs. That takes care of one thing.

New API users will also be unable to use it since it's private: they'll never even learn about it! That fixes the weakness of @Deprecated: while it signals to existing users that they should migrate to something else (which is nice), it still advertises the method to new users, who might just disregard the deprecation warning and use it anyway -- which is really bad.

So the only remaining issue is: how do you keep previously compiled binary code running?

The reasoning here is that if you still have humans compiling code, they will kinda be forced to migrate to the new APIs due to the compile-time errors. So source-code compatibility might not be as important for this given method.

But there is a big problem when you have existing binary code executing -- that code will fail at runtime!

The trick is to leverage the fact that NetBeans has its own module system that does the class loading, and to patch the class at load time. (This is probably something they couldn't have implemented if they depended on some external OSGi container.)

Since @PatchedPublic has a CLASS retention policy, it is part of the bytecode and thus accessible to the module system. So, probably using some bytecode engineering, the method is patched at load time to become public, and the old binary code executes just fine.
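Out of curiosity, here's roughly what that load-time patching could look like. This is just a minimal sketch using the ASM bytecode library; the class name and entry point are mine, and the actual NetBeans implementation surely differs:


import org.objectweb.asm.ClassReader;
import org.objectweb.asm.ClassWriter;
import org.objectweb.asm.Opcodes;
import org.objectweb.asm.tree.AnnotationNode;
import org.objectweb.asm.tree.ClassNode;
import org.objectweb.asm.tree.MethodNode;

// Hypothetical transformer: widens @PatchedPublic methods to public
// before the class is defined by the module class loader.
public class PatchedPublicTransformer {

    private static final String DESC = "Lorg/openide/modules/PatchedPublic;";

    public static byte[] patch(byte[] classfile) {
        ClassNode cn = new ClassNode();
        new ClassReader(classfile).accept(cn, 0);
        for (MethodNode mn : cn.methods) {
            if (hasPatchedPublic(mn)) {
                // drop private/protected and force public
                mn.access = (mn.access & ~(Opcodes.ACC_PRIVATE | Opcodes.ACC_PROTECTED))
                        | Opcodes.ACC_PUBLIC;
            }
        }
        ClassWriter cw = new ClassWriter(0);
        cn.accept(cw);
        return cw.toByteArray();
    }

    private static boolean hasPatchedPublic(MethodNode mn) {
        // CLASS retention means the annotation is not visible to reflection,
        // but it is still stored in the class file as an "invisible" annotation
        if (mn.invisibleAnnotations == null) {
            return false;
        }
        for (AnnotationNode an : mn.invisibleAnnotations) {
            if (DESC.equals(an.desc)) {
                return true;
            }
        }
        return false;
    }
}


A class loader that runs something like patch(...) over every class it loads would make old binary callers link against a public doSomething() again, with no recompilation needed.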

I'm usually not a big fan of annotations (I find most of the new NetBeans annotations that replace layer.xml more confusing than simplifying), but this annotation was a nice find.

But how did I find this annotation?

Well, I'm one of those guys who still has source code compiling against methods that used to be public, and I'm now trying to figure out how to update my code :-) A bittersweet finding.

Thursday, January 06, 2011

Browser-based IDE

I'm spending a lot of time using the web browser, whether to read my email, check my build status on Hudson, or see the latest changes on BitBucket.

Most of the stuff I produce is destined to live on some server: emails, blog posts, wiki pages, issue tracker comments, source code changesets, build tags and even the builds themselves. This means that, most of the time, the local data is just a temporary cache until I finish my task.

But modern-day web applications should provide just this: a way to do your task using a local cache and then publish the result to some remote server.

The IDE is a prime candidate for a serious web application: your projects are always in some version control system, developers really care about their IDE configuration and the server could really help with the workflow and build times.


Your projects are always in some version control system

The local cache is just a matter of convenience; what you really care about are your local source code changes, which become your changeset.

Losing the local cache shouldn't be a problem other than the inconvenience of waiting for a re-download. Treating the local source code tree as something transient will encourage better practices like simpler workspace configurations.


Developers really care about their IDE configuration

Installing the IDE on a new machine means spending time re-adding your preferred tab-size, formatting options and so on. A web app will just store that in your user preferences.

The server is very good with caching and indexing


There's nothing more annoying than noticing how much time the IDE spends indexing or processing very popular libraries like Apache Commons.

Imagine how much CPU time has been burned world-wide indexing the same library over and over, just so you could see some methods in a code completion popup!

All that wasted time could be avoided by having the server index and cache a given library version once, and then just downloading (part of) that index into the browser when needed.
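To make the idea concrete, here's a small sketch of what the client side of such an index cache could look like. Everything here -- the backend URL, the endpoint, the on-disk layout -- is imagined, not a real service:


import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

/**
 * Hypothetical sketch: fetch a pre-computed code-completion index for a
 * given library version from an imagined IDE backend, caching it locally
 * so the same library is never indexed twice on this machine.
 */
public class IndexCache {

    private static final String BACKEND = "https://ide-backend.example.com/index/";

    public static Path indexFor(String artifact, String version) throws Exception {
        Path local = Paths.get(System.getProperty("user.home"),
                ".browser-ide", "index", artifact + "-" + version + ".idx");
        if (Files.exists(local)) {
            return local; // already cached: no local re-indexing needed
        }
        Files.createDirectories(local.getParent());
        URL url = new URL(BACKEND + artifact + "/" + version);
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        try (InputStream in = conn.getInputStream()) {
            Files.copy(in, local); // download the server-built index once
        }
        return local;
    }
}


The design point is that the index is keyed by artifact and version, so the expensive work happens exactly once, server-side, for everyone.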

The server is very good with large builds


If the backend server is powerful, you could offload large builds to it too. An artifact might take a lot of time to compile on your local machine, but it might build a whole lot faster on a powerful server or distributed across some build cluster.

If building on the server plus downloading the artifacts to your machine takes much less time than just compiling locally, why not do it?

Plus, you could even share the build artifacts with your team! Using some server-side approach, you could just ask the IDE to post the build on the server and share the artifacts among the team (yes, I know about Maven repositories).
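Again, as pure speculation, the offloaded build could be a single HTTP round-trip: the changeset goes up, the artifact comes down. The URL and the one-shot protocol below are made up for illustration:


import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

/**
 * Hypothetical sketch: offload a build by uploading the current changeset
 * to an imagined build endpoint and downloading the resulting artifact.
 */
public class RemoteBuild {

    public static Path build(Path changeset) throws IOException {
        URL url = new URL("https://ide-backend.example.com/build");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        try (OutputStream out = conn.getOutputStream()) {
            Files.copy(changeset, out); // upload only the changeset, not the whole tree
        }
        Path artifact = Paths.get("build", "artifact.jar");
        Files.createDirectories(artifact.getParent());
        try (InputStream in = conn.getInputStream()) {
            // the server did the heavy lifting; we only download the result
            Files.copy(in, artifact, StandardCopyOption.REPLACE_EXISTING);
        }
        return artifact;
    }
}


A real version would obviously need authentication, incremental uploads and polling for long builds, but the shape of the workflow stays the same.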

Web developers' dream

But I'm writing this from the perspective of somebody who builds desktop applications (NetBeans Platform based, actually). What if you are writing a web app?

Well, after you press "deploy", you just let the IDE upload your app to the test server and open another browser tab.

Or you press "deploy", which commits your changeset to the IDE backend server; the server saves it into your local history and then uploads the new app to the test server. Pretty soon all the hard work happens behind the scenes and you are free to work on huge builds using really low-powered netbooks.

And if you only upload / download changesets, you just might be able to work over dial-up!

Thin, thick, or remote client?


Of course, there are moments when developers are offline or when bandwidth isn't abundant, so I don't view this browser-based IDE as a remote client or a thin client that sends everything to the server.

I see it more as a thick client -- just as usable offline, but much faster and more convenient online, when it can offload work to the server too.

Some of the features listed above would only work if the server does your builds and has access to your source code, so they wouldn't work if you are just using the IDE on private projects that remain on the local machine and never touch the IDE's backend servers.

To be continued...


This is just a first blurb about how I imagine a browser-based IDE, and I'm looking forward to seeing it happen, either as some JavaScript thing or as some NetBeans fork running as a super-applet with local permissions.

Google might be serious about Chrome OS and their Cr-48 laptop, but they don't mean business until I'm able to develop on one too.

This might also be the end of the IDE acronym because I'm not talking about an integrated development environment but a distributed development environment.
