Monday, February 02, 2009
iPhone Location Manager taking forever
It does have some issues though: CLLocationManager doesn't work if it's called from another thread!
I first noticed something was really funny when my delegate wasn't being called at all.
Neither -locationManager:didUpdateToLocation:fromLocation: nor -locationManager:didFailWithError: was called, and my application just sat there waiting forever for some GPS information.
My first thought was that it was some issue with my memory management as I wasn't holding a reference to the location manager in any class, just in the method where it was created. But still, it didn't work.
Then I thought it was a problem with the threading model being used (I waited for the GPS location on another thread in order not to block the GUI). Sure enough, that seemed to be the problem, and at least one other person has complained about it. I'm not sure whether it is a matter of threading or of the (autorelease) pool being used.
But more to the point: always create your CLLocationManager instance on the main thread, not on another thread. Having a singleton method that is first called from the main thread somewhere ensures the location manager is created on the proper thread/pool.
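A minimal sketch of such a singleton accessor (the class and method names are mine, not from the original app). The likely explanation for the symptom above is that CLLocationManager delivers its delegate callbacks via the run loop of the thread it was created on, and a secondary thread normally has no running run loop:

#import <CoreLocation/CoreLocation.h>

// Hypothetical helper class: creates the shared CLLocationManager lazily.
// The NSAssert documents the rule from the post: the first call must come
// from the main thread, so the manager is bound to the main run loop.
@interface LocationService : NSObject
+ (CLLocationManager *)sharedManager;
@end

@implementation LocationService

+ (CLLocationManager *)sharedManager {
    static CLLocationManager *manager = nil;
    if (manager == nil) {
        NSAssert([NSThread isMainThread], @"create the manager on the main thread");
        manager = [[CLLocationManager alloc] init];
    }
    return manager;
}

@end

Any background thread can then ask for [LocationService sharedManager] (after the main thread has touched it once) and the delegate callbacks still arrive on the main run loop.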
Friday, January 30, 2009
Developer surprise on OSX
This worked initially, but after an update, Address Book got confused and entered into an infinite cycle (it was probably trying to ignore the cards in the rule itself and then went on to resolve that recursively).
Anyhow, the good thing was the application crashed only if I scrolled on top of that particular rule. And since I had quite a lot, I was safe to open the application at least.
But, still, having a semi-buggy application isn't fun to use. So I went and looked at the Address Book file format which seemed to be some sqlite3 database, but I couldn't fix the problem from there.
To my surprise Apple has a public API for the Address Book !
So I wrote these short lines of code:
ABAddressBook *AB = [ABAddressBook sharedAddressBook];
NSArray *groups = [AB groups];
for (int i = 0; i < [groups count]; i++) {
    ABGroup *group = [groups objectAtIndex:i];
    NSString *name = [group valueForProperty:kABGroupNameProperty];
    if ([@"BadBadRule" compare:name] == NSOrderedSame) {
        [AB removeRecord:group];
        [AB save];
    }
}
and that was it! No more Address Book crashes! Turns out OSX is really nice to tweak if you are willing to code a bit.
Wednesday, January 14, 2009
My Slicehost / VPS analysis
Starting a few months back, I have a VPS from Slicehost. It's the cheapest one they've got, with only 256MB RAM.
I had never worked on a VPS before; I only had either dedicated physical servers in the company datacenter (at the previous job) or CPanel-based hosted accounts (for some other clients).
All in all, a VPS is just as one might expect: almost like a normal server only slower.
And the slowness is starting to bug me a bit, specifically the problem that I don't know how slow it is supposed to be.
The fixed technical details from Slicehost are that you'll have 256MB RAM, 10GB of disk storage and 100GB bandwidth.
Now there are 2 issues here. One which seems quite obvious and another one I'll introduce later.
CPU
OK, the 1st problem is that you don't know how many CPU cycles you are going to get. Being a VPS means it runs on some beefy server (Slicehost says it's a Quad core server with 16GB of RAM).
According to Slicehost's FAQ:
Each Slice is assigned a fixed weight based on the memory size (256, 512 and 1024 megabytes). So a 1024 has 4x the cycles as a 256 under load. However, if there are free cycles on a machine, all Slices can consume CPU time.
This basically means that under load, each slice gets CPU cycles depending on the RAM it has (ie. the price you pay). A 256MB slice gets 1 cycle, the 512MB slice gets 2 cycles, the 1GB slice gets 4 cycles and so on.
The problem here is, of course, that one cannot be certain the server only holds a maximum number of slices; Slicehost is clearly overselling, as top usually displays a "steal time" of around 20%.
So, assuming a machine is filled 100% with slices and there is no multiplexing, it means that a 256MB slice gets 6.25% of a single CPU under load.
6.25% isn't much at all, but considering that the machine isn't always under load, the slice seems to get a decent amount of CPU nonetheless.
If we consider the overselling issue and that 20% is stolen by Xen to give to other VPSes, we get to an even 5%.
Now, this might not be as bad as it sounds CPU-wise as I've noticed Xen stealing time when my CPU-share is basically idle anyhow so maybe it doesn't affect my overall performance.
For example: ./pi_css5 1048576 takes about 10 seconds which is more than decent.
The bigger problem with a VPS seems to be the fact that hard drives aren't nearly as fast as RAM. And when you have a lot of processes competing for the same disk, it's bound to be slow.
What Slicehost doesn't mention is whether the "fixed weight" sharing rule they use for CPU cycles applies to disk access too. My impression is that it does.
After trying to use my VPS as a build server I've noticed it grind to a halt.
top shows something like this:
Cpu(s): 0.0%us, 0.0%sy, 0.0%ni, 62.2%id, 20.9%wa, 0.0%hi, 0.0%si, 16.9%st
but the load average for a small build is something like
load average: 1.73, 2.06, 1.93
and it easily goes to 3, 4 and even 9! when I also try to do something else there.
So, looking at the information above, we can note that the CPU is just idle 62.2% of the time, while the actually "working" tasks spend 20.9% of the time waiting for IO. The remaining 16.9% of CPU time is stolen by Xen and given to other virtual machines, which I don't think really matters given that the load is clearly IO-bound.
And here lies the problem: just how fast might Slicehost's hard drives be? And how many drives per slice? Or rather: how many slices per drive?
From a simple test I made, a simple build that takes 30 seconds on my MacBook Pro (2.4Ghz/2GB ram/laptop hard drive-5400rpm) takes about 20 minutes on the slice. This means the VPS is 40 times slower when doing IO-bound tasks.
Another large build that takes around 40 minutes on my laptop took 28 hours on the server, which matches the roughly 40-times-slower rule.
Now, considering the above numbers and a 20% steal time, I'd expect about 20% overselling of slices on a physical machine. Meaning, at 16GB per machine, roughly 76 slices of 256MB on one machine. Taking into account the 1:40 rule above for IO speed, this suggests they have about 2 hard drives in a server.
Conclusions
It's certainly liberating to have complete control over a server. CPanel solutions just don't cut it when you need to run various applications on strange ports. Of course, the downside is that you also have to do all the administration tasks, secure it, etc.
The Slicehost service is very decent price-wise, and the "administrator" panel provides you with everything you need, even a virtual terminal that goes to tty1 of the machine (very handy if for some reason SSH doesn't work).
Even the smallest slice I'm using right now has enough space, RAM and bandwidth for small tasks. If you just use it sparingly during business hours, the "fixed weight" sharing rule gives you enough CPU / IO for most tasks.
But for heavy usage, I think the solution is either to get a more expensive slice or start building your own machine.
IO-bound tasks are almost impossible to run due to the 1:40 slowness noticed. This means that you need to get at least the 4GB slice to have it run decently. Of course, that's $250 compared to the $20 slice I have right now.
CPU doesn't seem to be a problem, at least for my kind of usage. It seems responsive enough during normal load and mostly idle under heavy load (so idle that Xen gives my CPU cycles to other virtual machines). Initially I was expecting this to be a major problem while moving my build server there, but boy, was I wrong. IO-limitations don't even compare with the CPU limitations.
Getting 5% or more of a fast CPU doesn't even compare to getting 2.5% of a much slower resource like the hard drive if you are compiling.
Further experiments
During the time I was considering the CPU to be my future bottleneck, I was thinking which option would be better: 2 x 256MB slices or a bigger 512MB slice.
According to their rules and offering, the two configurations are totally comparable. Even more, using their sharing rule, 2 x 256MB slices should get at least the same CPU cycles under load as the 512MB one. (Further emails from Slicehost's support led me to believe the rule might be oversimplified, but they didn't tell me in what way -- I think the weight of the smallest slices might be even smaller with the bigger slices getting even more of their share).
So, if under load they get the same CPU cycles, it means that when the machine has CPU cycles to spare, I have 2 candidate slices to get those spares.
So the question was: for a 5% price increase I would pay for 2 x 256 slices compared to 1 x 512 slice, will I get at least 5% more CPU cycles ?
With the data I've computed, I'm still not certain that it does. Also, the new question now would be: will I get at least 5% more IO operations?
Non-aggression
The above post isn't a rant against Slicehost. I think they are providing a decent service for the price. It is interesting, though, to see what kind of usage one can put on a VPS and what is better run on the server in the basement.
512MB update
Well, isn't this interesting. A vertical upgrade to 512MB of RAM is another world entirely. Maybe the new VPS is on a less-loaded machine, but at first sight it's looking way better: where a fresh build previously took 28 hours, a small update build now takes only 40 minutes. I'll try a clean build later this week and see how fast it is.
So it seems it wasn't only a problem of slow IO, it was also a big problem of not enough RAM leading to swap file thrashing.
Friday, November 07, 2008
I guess it has begun: the environment is at fault for everything
Take for example my main bank BRD - Groupe Société Générale. Yes, it's the same Groupe Société Générale which showed at the end of 2007 a € 4.9 billion fraud. But it's OK since the Romanian branch is really profitable for them due to limited consumer education here and powerless consumer protection institutions.
I just noticed a new message from them on the Internet Banking site: due to increased environmental awareness from the Bank, they are encouraging people to get alternative bank account statements via online banking or by post. Otherwise you are entitled to one printed account statement per month from their offices.
The reason is, of course, to save the trees by printing less. Of course they are willing to print tons of the stuff if you are willing to pay -- which will go directly into their profit but that's another problem, no ?
Also add here that they also increased the tax for having an account by 20% for individuals and 50% for companies. That probably also had some environmental reasoning that's escaping me.
Anyhow, I'm looking forward to more price increases and consumer ripoffs done in the name of the trees.
Too bad we people can't buy our own carbon credit so that companies won't be able to offset that extra cost onto us in the name of the environment. But you know what? I'm pretty sure someone will introduce carbon credits for the masses. After all, why not? It's a nice way to bring some more money to the state budget.
And only then will that old saying come true: they'll tax you for the air you breathe!
Well, technically for the air you exhale, but we're close enough.
Thursday, August 07, 2008
No new mail! Want to read updates from your favorite sites?
When I get a new email, it sits in the Inbox until it is resolved (ie. I reply or read it). Then it's instantly archived. Out of sight, out of mind.
I've found that this technique greatly reduces the information overload coming from email. With a full inbox that was also showing snippets of the messages (ie. small previews), every time I looked at my inbox I had some information to process. Like: oh, look, that one is starred, I wonder when they'll reply or hm, it's been quite some time since I've got an email from X, as the name is at the bottom of the inbox, etc. etc.
Basically a full inbox sends you some information even when no unread emails exist. It's also quite a bad way to "search" for email. I used to manually look for some subject and/or sender in order to hit reply. Now I just use GMail's search.
I remember a TED video where the host said something like: our brain likes new information, we have an addiction for new stuff. Which is exactly what email feeds. It feeds our addiction for new things, even by just having a full list of previously received emails. I also assume that's why sites like Slashdot, Digg and Reddit are quite popular: they feed us new, easy-to-process information. Think brain junk-food if you will, or the Internet equivalent of too much TV will rot your brain.
Related to this need to always get new stuff, I find it interesting the way Google handles this. When your inbox is empty, you get this message: No new mail! Want to read updates from your favorite sites? Try Google Reader (with a link to google reader).
So what Google is doing here is providing us with what we have become used to. Not enough interruptions, not enough new stuff from email? Why gee, why don't you try this other source of new things: Google Reader. Come on, get a quick fix!
Thursday, July 17, 2008
Oh, my, how the NetBeans community has grown !
Just now openide@ has 2000 unread messages, the oldest unread being from 26 November 2006 about the Manifest File Syntax tutorial (boy, a lot has changed in the Editor APIs). nbdev@ also has about 1700 unread, but that's OK as I rarely post / answer there.
Now, this trend seems to be caused by two reasons: me being busy (and lately I'm working full-time on getting the Editor APIs usable in a standalone way) and the community growing.
I do remember the time when I had zero! unread messages. Now I hardly notice when another hundred adds up.
So, how do you guys handle the workload ?
Of course, the solution might be to be a little more methodical about it and dedicate some exact time (like 30 minutes / day) but it just doesn't seem to work with me. Must be the 100 Editor modules I have open right now in the IDE -- sigh...
Sunday, July 06, 2008
I'm not sure I like Web 2.0
Sure, a URL could actually be a script behind the scenes that allows for a more dynamic page.
But when the script is used to discriminate users for a supposedly free site like YouTube, I'm getting kinda annoyed.
This video is not available in your country ?
Un-believable.
So Web 2.0, besides all the AJAX thingy, also brings a wide-spread encouragement to use a proxy to hide your identity ? Is this a social construct to teach us about security and privacy ? Or just a degeneration of what the Internet was supposed to be ?
Thursday, May 22, 2008
I forgot what Alt + F4 does
Now, it was clearly some funny-man trick but then it occurred to me that I'm not certain what Alt+F4 does.
I've been using OSX for so long I had forgotten about Alt+F4 on Windows. I guess this is enough Microsoft-independence.
I never wanted to program on Windows or using anything Microsoft specific. This led me to Java initially, then Java on Linux and soon to Java on OSX...
Monday, April 21, 2008
Should the language shape the mind ?
There is an often quoted phrase in the programmers circles:
When the only tool you have is a hammer, everything looks like a nail,
which basically states the same thing: the languages we programmers know and use influence the way we perceive reality.
That's a dangerous thing, because languages (programming languages) were only supposed to help us interpret information. Skewing our perception means we obviously don't even notice the wrong path we've taken.
Being multi-cultural -- that is, knowing multiple languages -- helps, as these may overlap and give you various perspectives on the information and thus make a better representation of the information. The end solution is also most likely to be better.
But I often wondered: shouldn't we at some point just stop trying to force our thoughts into some language and start expressing them in another language altogether? In our own language?
Sure, learning a language might bring some "discipline" into minds; using it might help us programmers get along with each other. But in the end a language should just give a programmer some new perspectives. The output should still be in our own language.
I assume this is the reason most people think everybody else's code is shit: their internal interpretation doesn't map with the mapping inside the other programmer's brain. A younger you produced a lot of bad code by your current mapping.
Which basically means we are utterly unable to find a way to fully express our thoughts in a way other people would understand, agree and like. And by like I mean having a close mapping with the other (or just bring something totally fresh).
And this limitation doesn't just apply in relation with others but to ourselves.
Then, how should we function? Each new problem brings out a current solution in our present interpretation. Should we express this in something like a Domain Specific Language (DSL)? Should each new problem be represented in a new Problem Specific Language? (And sure, PSLs at a given time might have something in common, as they also represent us.)
So what does it mean that some code looks like shit, then? It means the chosen PSL is incomplete, somehow flawed or just not elegant enough compared with our current PSL. Rarely does code look like shit if we like the PSL and the solution is somewhat broken -- then it just has bugs or is incomplete, and we fix it while following the given PSL.
Won't this make cooperation really hard? Well, not really, as cooperation might change the PSL for the better. It will also force programmers to slow down a bit and try to first understand the PSL before understanding the solution. We do this anyway: even while using a common language there is always a programmer-specific meta-layer; only that this layer is sometimes obfuscated by the language used instead of being very prominent like in a PSL.
Maybe general purpose programming languages should stop existing and be replaced only by programming paradigms and concepts.
Tuesday, January 22, 2008
NetBeans Platform autoupdate via BitTorrent
And the first pet peeve I had is the fact that the Update Centers are such big, centralized, monolithic blocks.
I always assumed that I would need to hack the AutoUpdate module from the NetBeans Platform quite hard in order to get what I wanted all along: BitTorrent downloads for new or updated modules.
So, the first thing we have to notice is that the AutoUpdate Catalog (see DTD) provides for each module in the Update Center a location called distribution. The distribution may be a relative path to the catalog location. That is usually something like ./com-example-mymodule.nbm or it can be a totally new URL.
Now we have a first step towards splitting traffic: we can put the actual NBM file on another URL altogether. Or, if we have the AutoUpdate Catalog location sit behind a servlet we could even try a bit of balancing and return a different distribution link depending on how loaded are the servers. That's a plus...
Ok, that's something but it isn't BitTorrent, you still download the whole file from a single place.
It's alive !
But what the Platform does offer is the possibility to register in the Lookup your own URLStreamHandlerFactory. So, I can register a new handler for the torrent:// protocol and the AutoUpdate module will just use my stream handler.
And thus, a few hours later I have a working AutoUpdate infrastructure via BitTorrent. My StreamHandler downloads behind the scenes with BitTorrent using Snark and provides a nice InputStream to the AutoUpdate module. It's still not polished but already usable. Install the module from this update center: http://emilianbold.ro/modules-updates.xml or just grab the NBM.
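Outside the NetBeans Platform, the stream-handler idea can be sketched in plain Java. The class names here are mine and a placeholder stream stands in for the Snark download; in the real module the factory is registered in the Lookup rather than globally, and the connection would fetch the .torrent over HTTP and stream back the NBM bytes:

```java
import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.net.URL;
import java.net.URLConnection;
import java.net.URLStreamHandler;
import java.net.URLStreamHandlerFactory;

public class TorrentFactoryDemo {

    // Hypothetical handler for torrent:// URLs. The real module would fetch
    // the .torrent file, download the NBM via Snark and stream its bytes;
    // here a fixed placeholder stream stands in for the downloaded module.
    static class TorrentHandler extends URLStreamHandler {
        @Override
        protected URLConnection openConnection(URL u) {
            return new URLConnection(u) {
                @Override
                public void connect() { /* nothing to do in the sketch */ }

                @Override
                public InputStream getInputStream() {
                    return new ByteArrayInputStream("fake-nbm-bytes".getBytes());
                }
            };
        }
    }

    public static void main(String[] args) throws Exception {
        // The Platform picks the factory up from the Lookup; outside the
        // Platform we register it globally (allowed only once per JVM).
        URL.setURLStreamHandlerFactory(new URLStreamHandlerFactory() {
            public URLStreamHandler createURLStreamHandler(String protocol) {
                return "torrent".equals(protocol) ? new TorrentHandler() : null;
            }
        });

        URL url = new URL("torrent://example.com:6881/module.torrent.nbm");
        InputStream in = url.openStream();
        byte[] buf = new byte[64];
        int n = in.read(buf);
        System.out.println(new String(buf, 0, n));
    }
}
```

The AutoUpdate module never knows the difference: it just opens the URL and reads the NBM from the InputStream.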
Something else: the distribution of the module no longer points to the NBM, but to a torrent file which contains the NBM.
The steps are: place the torrent file at http://example.com/module1.torrent.nbm, edit the catalog to have distribution="torrent://example.com/module1.torrent.nbm" and you're good to go. Behind the scenes I'll actually download the torrent file over HTTP and then download the NBM via BitTorrent.
The catalog entry looks like this:
<!DOCTYPE module_updates PUBLIC "-//NetBeans//DTD Autoupdate Catalog 2.3//EN"
        "http://www.netbeans.org/dtds/autoupdate-catalog-2_3.dtd">
<module_updates timestamp="35/26/18/21/01/2008">
    <module codenamebase="org.yourorghere.emptymodule"
            distribution="torrent://example.com:6881/cc032d0c003b12568c91a0339f88301fa6ca67f5.torrent.nbm"
            ... >
    ...
A small remark: note the .nbm extension. It's something the AutoUpdate module needs, otherwise it won't be able to install the file as an NBM (could be a bug, I'll report it at some point).
The module still needs some extensive testing and different BitTorrent libraries (I'm using Snark, but I would like to have the Azureus core as a different provider in the Lookup maybe) but it does show it is possible.
Using the same technique one could write multiple backend/"protocols" for the AutoUpdate. Drop me a message if you want to know more or want to help me (source code will be online soon).
Thursday, December 20, 2007
The opensource bureaucracy
But one may be shocked to notice the kind of bureaucracy open source brings. In a normal "distributed" project where you don't have a sugar-daddy to pay for all the project hosting and other expenses, you need to get some free hosting.
This is the first place where you need to get an approval for your hosting, depending on what you do (you can't expect to have any project approved) or what license you use (you get the free hosting if you give your work under their preferred terms).
And the more "free" stuff you need (like build servers, wikis, email lists) the more you have to wait, accept rules and abide by them. But generally, wait and read a lot of strange disclaimers and terms and conditions.
Don't get me started when you get to the licensing part. Do you want your code into some high-profile codebase ? You need to sign the agreement, which needs to be scanned and emailed or even better faxed. Then you need to wait for the acknowledgment that the fax did arrive and someone is going to give you commit access, in a few days.
Basically the more people you involve the more it takes to do anything, especially since you depend on their goodwill. The more "steps" you have to follow, the more agreements you have to approve of, the more time you have to wait.
I'm waiting for a month now for some approval on a high-rated open-source nexus. I'm not being denied, I'm just waiting for someone to finally get to my item in the todo list.
It almost makes renting my own server seem like a good expense.
Wednesday, December 05, 2007
Thursday, November 15, 2007
I really like Java's tooling
Really, Java has great developer tools.
Long ago I liked to experiment with a lot of different programming languages (I own lisp.ro). Many languages got things right from the beginning while Java with the C++ inheritance is really, really verbose. Nowadays I mostly play with Python, Javascript (and studying Erlang).
But what Java lacks in succinctness compensates in tools. Big, juicy, gooey tools.
First, a bow to the JVM. It's such a nice feeling to develop on OSX and only test rarely on Windows and have everything work !
Second, I really like my NetBeans IDE with my debugger and trusty profiler. Problem with the EJB: bam! add --debug to Glassfish and connect from the IDE. Possible performance problems? kpow! attach the profiler to the application and see what's the problem.
Wanna see the health of your code: put a whole bunch of reports in maven and build your site (findbugs, pmd, taglist, checkstyle, all good stuff).
And if you feel in a coding mood, why don't you add a MBean to get quick info from jconsole, even remotely? Or even better, make a custom JMX client using JFreeChart to get a nice display of the health of the application.
It just feels like software engineering. And it's nice.
Tuesday, October 30, 2007
Java 6 meet Linux on MacBook Pro
I mean, it's not like it's been almost 11 months since SUN released Java 6 on Linux, Solaris and Windows. And I bet it's been more than a year since Apple started getting Java 6 code from SUN in order to customize it in due time.
But I guess the iPhone and the transparent menu are far too important to actually put some developers to work on Java.
It's bad enough that they are supposed to force down on my throat a new OS release for a new JDK version -- I just want Java 6 on Tiger thank you very much. But now, they are delaying even that !
So, I'm thinking that in the future I'll probably go back to Linux and stay there. I thought proprietary Microsoft software is bad; well, I'm starting to believe maybe proprietary Apple software is just as bad (only prettier).
Hence, the first step is to get Linux working and see the hardware support. Because if it's not good enough I might just go for a Thinkpad.
What about that Apple ?
Monday, October 22, 2007
Coupon or negative numbers' marketing
So, you want a new car ? Well, our car is the best: it's 1000 euro "buyer (bar)gain".
Want to be subscribed to our useless service ? How could you refuse: the first month is free!
Why don't you migrate to the new version: it's 10% faster !
Want to take a 20 year long mortgage for 100K euro ? We are the best: we give you 4K euro for free.
As you noticed, all these advertisements avoid the real issue. They avoid the actual price (or actual speed, actual time to completion, etc).
All they brag about is that you get this discount or that super-offer. But they don't even bother to tell you the actual cost anymore.
I mean, in their mind, getting anything for free should be reason enough for people to buy their product. Makes sense to me... NOT.
My opinion is that this coupon advertising is trying very hard to confuse the buyer. Because if everyone uses the same unit for their product, like price, it's easy to compare products.
But how hard is it to compare an offer where I get 2 free months with one that gives me a free (cheap) cell-phone or another one where I may have already won an all-expense paid trip to the Bahamas. See ? It's almost impossible.
So, please, marketing gurus, stop telling me things I don't care about. Tell me things I can quantify: if I get a discount, how much will it still cost? If your product is faster than the old one, how fast is it actually (maybe the old product ran on Cyrix processors)?
The exception is when I already am a customer so I do care what I have to gain. 10% speed: sure ! Half memory usage: excellent ! Less CPU usage: even better.
Thursday, October 18, 2007
New NetBeans Platform-based tool
First, from the screenshots alone it was clear to me it's a NetBeans Platform application. Also, the charts look awfully like the NetBeans Profiler ones. Well, lo and behold, it is a simple Platform application holding the profiler cluster.
What's annoying me is that the profiling part only works with Java 6 (not available on OSX). But the NetBeans Profiler does work with Java 5 if we just configure the proper agent. It would have been nice for VisualVM to also use the agent, as not everyone is using Java 6.
Second, the OSX integration is less than stellar (it's the 1st release so I'll excuse them). The menu doesn't show up on the apple menubar the 1st time you run the tool (but on subsequent runs it does, strangely). Also, no launcher like in the IDE.
Oh, forgot to mention it uses the NetBeans Platform from NetBeans 6. Looking good guys.
Thursday, October 04, 2007
(Maven) Building to a ramdrive
What's annoying me with the build system is that it usually writes a lot to the disk. Not only is that quite unnecessary (as the files will be overwritten in no time) but the laptop hard-disk is also quite slow and all this writing is thrashing it.
The solution: write to a ramdrive ! As you all know, a ramdrive is a "virtual" hard-disk that sits in your RAM and goes away when you shutdown the machine (not that I care, the build files are temporary).
First step: create the ramdisk
There are some utilities that do this, but it's quite doable from the terminal (eventually with a shellscript).
- First, get your disk size in MB and multiply it by 2048 (the size is given in 512-byte sectors). So 256MB means 524288.
- Next, create the device: hdik -nomount ram://524288 . The command will also display the name of the new device file.
- Create a HFS filesystem on it: newfs_hfs -v "disk name" /dev/diskXXX , where diskXXX is whatever the previous command printed.
- Mount the filesystem: mkdir /Volumes/diskXXX && diskutil mount /dev/diskXXX
At this point you should have a new 256MB drive mounted.
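The steps above can be wrapped in a small script. I'm showing it as a dry run that only prints the commands (hdik, newfs_hfs and diskutil exist on OSX only, and the device name must be substituted once hdik prints it); drop the echo prefixes to actually run it:

```shell
#!/bin/sh
# Create an OSX ramdisk of a given size (dry run: prints the commands).
# Usage: ./mkramdisk.sh 256 "build ramdisk"
SIZE_MB=${1:-256}
NAME=${2:-ramdisk}

# The ram:// size is given in 512-byte sectors, hence MB * 2048
SECTORS=$((SIZE_MB * 2048))

echo hdik -nomount ram://$SECTORS         # prints the new /dev/diskXXX
echo newfs_hfs -v "$NAME" /dev/diskXXX    # create the HFS filesystem on it
echo mkdir /Volumes/diskXXX
echo diskutil mount /dev/diskXXX
```

For 256MB the script computes 524288 sectors, matching the number in the steps above.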
Second step: link maven folders
Now, I have to set the "target" folders on the ramdisk. Normally the orthodox way is to change the pom but I just didn't get it working. So my old-school solution is to use symbolic links.
This could be smarter, as a "mvn clean" will remove the links we just created (but just keep a script in place that recreates them).
My script is:
#!/bin/sh
# Usage: ./linkramdisk.sh /Volumes/diskXXX
# For every project in the current folder, create its target folder
# on the ramdisk and symlink it into the project.
for i in *; do
echo "$i";
mkdir -p "$1/$i/target";
ln -s "$1/$i/target" "$i/target";
done
and I run it in the folder that keeps all my Maven projects (a flat hierarchy). Note the $1 which is the argument to the script. I use it like this:
$./linkramdisk.sh /Volumes/diskXXX
Building
All the links are in place so you can try a mvn install and see how fast it works. In my case, I reduced the build time (with no unit-tests) from 27 seconds to 17 seconds.
That doesn't seem much, but it does add up for larger projects and most importantly, it keeps the hard-drive out of the loop.
Oh, did I mention I also use FileVault on my account ? That's another reason one would like to avoid writing to disk: no need to encrypt something useless like a build artifact...
Saturday, September 08, 2007
Power-efficient CPU a non-issue ?
My point is: who cares about that ? I want my CPU to be fast first, eventually have multiple cores and some fast way to talk with my memory. It would be nice to also consume little power, but that's a nice touch so to speak.
I assume 90% of the CPU buyers don't have server farms to worry about their electrical bill so why induce this trend ?
I think the explanation is clear: Intel / AMD cannot increase speed easily anymore. Therefore they are convincing consumers that power consumption is what's important about a CPU. The result: you see all kinds of uninformed users wondering how much power the CPU consumes, as if they would notice the difference.
I don't want my CPU to consume less than my graphics card or my hard-drive. I'm buying it to work so I expect it to take some power. I would gladly take the power-hungry fast CPU than the low-consuming slow CPU.
As far as I know the operating system or any other software in this world isn't influenced by how much power the CPU takes, but it sure matters if the CPU is faster or has more cores (or some multi-threading per CPU).
So, congratulations to the marketing departments of Intel for convincing people that it's not speed or threads that matter (you know, the stuff people need) but power consumption.
Probably the PC already consumes less than my fridge, old TV or hair drier, but God-forbid it consumes more and gains some speed. No way, we have to be power efficient :-)
Tuesday, September 04, 2007
I saw Vista for the first time yesterday
Anyhow, since I'm the computer guy in the building, they came to me to get it started. First, because they had no sound. Also, because they have a speaker system with 5 or so satellites and only a 2 way sound card.
And this is how I saw Windows Vista for the first time. It was really weird as I hadn't used any Windows in a while, and Vista made me even more uneasy: I just couldn't find the settings I was used to from my XP days.
So, fast-forward an hour or so, after I had installed the sound drivers. The strange thing was that the system was rather slow and unresponsive, especially while installing stuff. I mean, this machine is better than any I own, so I expected it to fly. But no, it was just about as slow as my iBook G4.
About those accept/deny dialogs I've read about on the internet, I have to say I only noticed them a few times. I guess if you let Vista be, it doesn't bother you with messages :-)
But the thing this experience showed me is just how foreign Windows looked to me. Mostly due to Vista but the point was that Windows in general is something I don't know much about anymore.
I've used so many other operating systems that I've become more or less a Windows beginner.
I think that generally there isn't a need for Microsoft anymore. Windows is redundant with OSX / Ubuntu, MS Office easily replaced by OpenOffice, Explorer by Firefox.
They don't actually seem to have any product I would need anymore. And this is a nice feeling.
Friday, July 13, 2007
Maven projects with NetBeans IDE (Part 1)
(This article is on google docs too).
Introduction
I'll present here how to use Maven projects with NetBeans IDE via the MevenIDE project. Since the emphasis is on Maven and Maven-support, I'll just play the devil's advocate and try to focus on most of Maven's features while also underlining ant's flaws.
As always, the answer is somewhere in the middle. There are cases where ant is preferred to Maven and some when it's the other way around.
The build system
All NetBeans projects are ant-based. Ant is a very useful tool due to its cross-platform nature and usually easy-to-write build scripts. But the scripts to compile, generate, build and deploy projects usually end up being quite complex.
They are sometimes so complex that they basically become another part of the project. They are also non-standard, differing from project to project (especially legacy free-form projects).
Now, using the IDE, we get some pre-cooked scripts and special ant tasks. But your build system is now rather tied to the IDE (not actually to the IDE, but to those ant tasks). Everything is open source, true, but you can't just take the official ant distribution and build your project... You need to tweak it a little. (In NetBeans' defense, at least you have a ready-made ant build system. With other IDEs, you just have some metadata but nothing usable outside the IDE.)
Is it a bird, is it a plane ? It's Maven
Maven isn't an ant replacement. It includes a build-system component, but it's something else. It provides a consistent view on any project with a standardized project definition and plugins for reports and website generation. Think of it as a general project management tool that includes a build tool.
Basically with Maven you don't write any scripts ! Maven provides you a standardized way to structure your project and you just configure various build / reports / site plugins. Therefore you don't have any build scripts anymore -- you have a declarative description of your project, called POM (Project Object Model).
Maven also introduces the concept of repository. That is, a place where all the build artifacts (the resulting JARs or WAR/EAR files) are saved and retrieved from. For example it's easy to imagine a setup where the whole team uses a read-only repository with the 3rd party artifacts. Thus, they don't need to sit in the VCS !
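To make this concrete, here is a minimal sketch of a Maven 2 POM for a plain Java application. The coordinates (com.example / maven-javaapp) are the hypothetical example values used later in this article, and the junit dependency is just an illustration of an artifact being pulled from the repository instead of the VCS:

```xml
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>

  <!-- Coordinates: how this artifact is identified in a repository -->
  <groupId>com.example</groupId>
  <artifactId>maven-javaapp</artifactId>
  <packaging>jar</packaging>
  <version>1.0-SNAPSHOT</version>
  <name>Java Application</name>

  <!-- Dependencies are fetched from the repository, not kept in the VCS -->
  <dependencies>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>3.8.1</version>
      <scope>test</scope>
    </dependency>
  </dependencies>
</project>
```

Note there is no build logic here at all: the standard directory layout and the default plugins take care of compiling, testing and packaging.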
I really can't sum it up better than that so I recommend using the Maven website to learn some more.
MevenIDE
The Maven-NetBeans IDE integration is provided by (part of) the MevenIDE project (install it from here).
All my examples are based on NetBeans 5.5. Sadly, with the NetBeans 6.0 release approaching, most of the Maven-integration work is being put there. That is, for the NetBeans 5.5 version MevenIDE is somewhat beta quality and you're supposed to have more luck with NetBeans 6.0 (now at milestone 10).
You might stumble onto some bugs for the 5.5 integration but it's a very dynamic project with people still working on it and it's quite easy to find workarounds or help in the mailing lists (Milos Kleint is usually there to help).
So, if you don't like living on the bleeding edge, you're out of luck: your options are either the current Maven integration for NetBeans 5.5 or NetBeans 6.0 M10. But my advice is to take a chance ! It will be worth it.
Java Application Project
Let's see how you could use a Maven project instead of a normal Java Application Project.
After you created a new Java Application project with the NetBeans wizard, you should have in the Files view something like:
/src
/test
/nbproject
build.xml
manifest.mf
(The Files tab should be next to the Projects tab).
Now we select File->New Project and then Maven2->Archetypes project with the Quickstart Project template. Artifact Id is a public name for your project (for example: maven-javaapp) while Group Id is usually your company (com.example). Please note the "version" parameter -- you'll find it useful later with dependencies.
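For reference, the same quickstart project can be created outside the IDE with Maven 2's archetype plugin (the coordinates below are the example values from above; requires mvn on the PATH):

```shell
# Maven 2 syntax. Creates a maven-javaapp/ folder with pom.xml,
# src/main/java and src/test/java following the standard layout.
mvn archetype:create -DgroupId=com.example -DartifactId=maven-javaapp
```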
Both wizards create some files and some example classes. The NetBeans-default Java Application creates a Main.java file but no test. There are also a lot of semi-exotic files in nbproject/ .
The Maven wizard creates a pom.xml file and a src/ folder. The pom.xml file is the project-metadata for Maven. It basically replaces the build.xml and nbproject/ . It is not the only metadata file Maven uses, but it's the only one that's mandatory (and the one you usually see).
The project name
After you've created the Maven project, you'll see that the name is something like "Maven Quick Start Archetype (jar)". Of course we need to change that to match the default IDE style (which I'm trying to duplicate so far). So, we open the Project POM file in "Project files" and look for the <name> element. Just editing that to "Java Application" and saving the POM should be enough, the Maven Project from the "Projects" tab will refresh and you'll see the new name. Pretty nice, no ?
Please note the extra "Project profiles" file in there. As I told you, there may be some other metadata files, but for most actions, the POM is king.
That was a forced introduction in hand-editing the pom.xml file. You might as well have used the Project properties window to edit the project name. This will automatically change the POM file.
You will notice that the MevenIDE plugin has some pretty nice support for the metadata files. You have auto-completion in the POM file not only for the XML schema but also for plugins from the local repository. There's also hyperlink support (just hold the Control key pressed) that not only opens a browser for real URLs but is able to jump into modules and parent projects (this is a feature of Maven we're not going to talk in this article, but it's good to keep in mind).
Running
Since we already have the Source and Test folder, the main action we'll be doing is running a main-class.
First, if you select Run via Run->Run Main Project menu or the F6 key you'll see this warning message:
It basically tells us that the NetBeans-Maven bridge isn't configured yet, so it doesn't know what to run. As the message says, right-click the project and go to Properties.
We configure the Run->Main Class to something like "com.example.App" (or whatever your package is).
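The quickstart archetype generates a main class along these lines (the real file declares `package com.example;`, dropped here so the snippet compiles standalone; the greeting is pulled into a small method purely for illustration):

```java
// Sketch of the App class generated by the quickstart archetype.
public class App {
    // The message is extracted into a method here so it is easy to check.
    static String greeting() {
        return "Hello World!";
    }

    public static void main(String[] args) {
        System.out.println(greeting());
    }
}
```

Running the project should simply print the greeting in the output window.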
Now, if you open the POM again you'll see that a whole lot of data was automatically added under some <profiles> element.
Again, you select to Run this project (right-click on the Project -> Run or just press F6 if it's the main project). After some messages in the output window, you'll see something like this:
BUILD ERROR (Badly configured, need existing jar at ...)
Huh? What was that ??
Well, as I've said, bugs do exist and you've just stumbled upon MEVENIDE bug 485 . Luckily, there is a workaround: just open the POM and look for
<plugin>
<artifactId>maven-assembly-plugin</artifactId>
then add
<version>2.1</version>
What does this do ? Well, Maven uses various plugins to run everything from building to creating the JARs, to generating reports. Each plugin has a version so we can still have old projects working. Now, where do you expect all these plugins to sit ? Of course -- in the Maven repository.
Whenever a plugin isn't found, Maven tries each repository in order to download it (by default there's only one repository, the official Maven repository).
So, in our case, MevenIDE was using the latest maven-assembly-plugin from the repository, which was in beta and had some issues. By pinning the version to 2.1, everything works.
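Put together, the relevant POM fragment with the pinned version looks roughly like this (the surrounding build/plugins elements may already exist in your POM, in which case you only add the inner plugin entry):

```xml
<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-assembly-plugin</artifactId>
      <!-- Pinned so Maven doesn't pick up the buggy beta from the repository -->
      <version>2.1</version>
    </plugin>
  </plugins>
</build>
```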
Take note that I've said "download it" above. You should expect something like "Downloading maven-assembly-plugin" with a progress bar in the lower-right corner.
Testing
If you like writing tests (and don't you ?), just right-click the Project -> Test. You'll see the details in the output window. If everything is ok you should see BUILD SUCCESSFUL. If some tests fail, you get a BUILD FAILURE and you may click on the failed tests in the output window to see the stacktrace.
MevenIDE also has nice support for the output with marked stacktraces just like the standard projects.
Debugging
In order to debug a project, right-click it in the “Projects” tab and click Debug or just select Run->Debug Main Project (the F5 key). It should look the same as the IDE-project.
Are your tests failing ?
Maven is really in love with tests and these are so important that they break the whole build. That is, you cannot run or debug a project if one of your test fails. This does make sense in theory, but in practice you always have some tests that fail (at least I do ;-) ).
The workaround is to skip tests. From the IDE, you just go to the project properties -> Action Mappings and tick “Skip tests” for the action you want (Run project, Debug project or both).
If you run Maven from the terminal, you must add “-Dmaven.test.skip=true”.
If you feel really in shape, you can configure in your POM the plugin that's in charge for tests and just exclude the tests you are still working on. Just add something like this and you're done:
<project>
<!-- .... -->
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-surefire-plugin</artifactId>
<configuration>
<excludes>
<exclude>**/TestBroken1.java</exclude>
<exclude>**/TestStillFailing2.java</exclude>
</excludes>
</configuration>
</plugin>
</plugins>
</build>
<!-- ... -->
</project>
Note the already familiar artifactId and groupId for the plugin configuration.
This is the end of part 1. Stay tuned for the next articles about how to use dependencies, how to configure your own repository, how to make EJB/EAR artifacts and even NetBeans Platform modules.