Tuesday, December 13, 2005
These people are likely the very same people who cut you off while crossing 5 lanes of traffic because they are going to miss their exit. They are likely the very same people who use the 10 items or less lane at the grocery store for an entire shopping cart full of food. They are also very likely the same people who keep stealing everyone's pens.
But I bet they are making a killing selling their blog spam software!
Friday, November 25, 2005
But... the files are posted on the web as Microsoft Windows Media Player streams or something like that... which means the media file linked to by the index page is actually a small (5 line) XML file containing HTTP URLs to another site, and if you fetch one of those URLs you get a plain text file just 2 lines long... so I can't download the videos onto my laptop, and even if I figure out how to get the actual video file, it's a rather convoluted process. *sigh*
But it would be nice to download the videos, as I'll be away from a network connection all weekend but would still like to have the lectures available to me. So now those of us who don't have network access this weekend won't have access to the lectures, while those who keep their broadband network access (or are physically near campus and can just walk or drive over to review them) have an advantage. *sigh* Thanks, Microsoft. I'm sure the guy who thought up this roundabout way to describe a video file is quite proud of himself. He probably got a few extra thousand dollars of bonus money one year, and the rest of us get to suffer with a web of links which don't work with conventional tools (e.g. Safari, Firefox, curl, wget, ...).
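For what it's worth, unwrapping the indirection is mostly just pulling URLs out of tiny text files. A quick sketch (the sample redirect file below is made up from the description above, not the real format, and the hostname is hypothetical):

```python
import re

def extract_urls(text):
    """Pull HTTP URLs out of a stream-descriptor file (the small
    XML or plain-text redirect files described above)."""
    return re.findall(r'https?://[^\s"\'<>]+', text)

# Hypothetical sample of the small XML redirect file; the real
# format is a guess based on the description above.
sample = '''<asx version="3.0">
  <entry>
    <ref href="http://media.example.edu/lectures/lec01.txt"/>
  </entry>
</asx>'''

print(extract_urls(sample))
```

Repeat on each file you fetch until you bottom out at the actual media URL; still convoluted, but at least scriptable.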
Friday, November 11, 2005
With 6 questions per homework and 5 homeworks in the semester each question is worth 1.16% of my final grade. Remember that since this class is essentially pass/fail with 93% and below considered failing, 1.16% of my final grade is really worth 19.3% of the pass/fail grade. So each homework question is worth 19.3% of whether or not I am a good enough student to work on a PhD in Computer Science.
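The arithmetic above, spelled out (the 6-point margin comes from the 94 pass line; the 1.16% per-question weight is as stated, which implies the homework portion of the grade is about 35%):

```python
# 6 questions per homework, 5 homeworks in the semester.
questions = 6 * 5
per_question = 1.16                  # % of final grade per question, as stated
margin = 100 - 94                    # points of room between perfect and the pass line
share = per_question / margin * 100  # % of the pass/fail margin per question
print(round(share, 1))               # works out to roughly 19.3
```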
Remember that questions are randomly selected on the homework. So if I get 5 out of 6 completely correct and the one I got wrong is one of the ones graded, I lose 19.3% of my pass/fail grade. If another student gets only 3 out of 6 completely correct, but was lucky enough to have gotten all 3 of the graded questions right, they lose none of their pass/fail grade.
How exactly is this a fair judgment of the student's ability? For starters we have no idea if the other student (who only got 3 of 6 correct) really did get 3 of 6 correct or got 6 of 6 correct, as 3 of the questions weren't even graded. Secondly we're talking about 1 question being worth 19.3% of the pass/fail grade. If both students scored the same on all other homeworks and exams and both are close to the 94% pass/fail line, it is possible for the one student who got 2 more questions right on a homework assignment to actually fail the class, while another student who got those questions wrong will pass the class without a problem.
The instructor is grading this way to "save work for the TA". Other graduate courses on the same material at other schools require the students to grade their peers; thereby removing the grading load from the TA entirely. If the instructor wants to save work for his TA then maybe he should use such a grading policy. But isn't the point of a TA to help the instructor with work such as grading, not sit around and twiddle his thumbs and get a free ride? I'm also a TA and I don't get a free ride. The TA for this course shouldn't get a free ride either.
So not only am I stuck having to deal with this class, but now I also get to find out which side of the lucky/unlucky line I reside on. If it is the lucky side I better run to Vegas quick, because I'll make more there in a weekend than I ever could in a lifetime as a PhD. If it is the unlucky side then I have a lot of studying to do this winter so I can pass the oral exam for this qualifier component, as I failed the class. *sigh*
More interestingly, even if I could find and document such a case of one student passing while the other failed, yet the failing student did better overall on the homework than the passing student, I would be unlikely to win a petition on the subject to the graduate curriculum committee, as such a decision could possibly void a number of grades given for this class over a number of years. Talk about tipping over the apple cart.
So the short of it is: Life is a bitch. And any grade earned in this course is pretty much irrelevant, as only 6% of your grade actually matters but 30% of the grade is randomly determined. Uhm yea, I'm motivated to care about this material.
I might as well just flip a coin to determine if I would make a good PhD student.
*flip* Damn. Tails. I guess it's time to pack up, go home, and get a real job.
Thursday, November 3, 2005
The course must be passed with a score of 94 or higher (out of 100) to have it count as having met one of the core requirements for a doctorate degree.
The syllabus of the course specifically states that a score of 90 or higher in the class will be marked as an 'A' (the highest grade one can earn in a course at RPI) to the registrar, and thus on your transcript.
So therefore I can earn a 93 in the class, have an 'A' (remember, this being the highest grade available at RPI) appear on my transcript, BUT I FLUNKED THE REQUIREMENT. No doctorate degree for me. Do not pass go. Do not collect $200. But collect an 'A' on your transcript.
Of course the syllabus also defines scores above 80 and below 90 as a 'B', lower scores as a 'C', etc. So obviously the course is not pass/fail. But it might as well be. And nobody can tell by looking at my transcript whether or not I received a 93 or a 94 in the course. Yet a 93 means I can't get a doctorate while a 94 means I can continue working on one.
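The paradox in code form (the 90/80 cutoffs come from the syllabus description above; the 'C' fallthrough is my shorthand for "lower scores"):

```python
def transcript_grade(score):
    """Letter grade as the syllabus defines it: 90+ is an 'A',
    80-89 a 'B', lower scores lumped into 'C' here."""
    return 'A' if score >= 90 else 'B' if score >= 80 else 'C'

def meets_phd_requirement(score):
    """The graduate school's bar: 94 or higher counts."""
    return score >= 94

# The whole complaint in two lines:
print(transcript_grade(93), meets_phd_requirement(93))  # A False
print(transcript_grade(94), meets_phd_requirement(94))  # A True
```

Two transcripts, both reading 'A', one of which ends a doctorate.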
Clearly not all values of 'A' are equal, just as not all values of 1 are equal, such as in 1 + 1 = 3. Wow, I guess this professor at RPI just proved that 1 + 1 = 3 since 'A' does not equal passing. Or something like that. It's a little too late in the day to be trying to construct such a proof. But I do think it is horribly unfair.
RPI should just relabel this class as pass/fail for doctorate students. You either earn the grade required to pass the course or you don't; giving out 'A's while failing the student is just plain wrong. It's sort of like firing people while giving them a huge raise at the same time. "Hey Bob! Good news! You are getting a raise of $30,000/year! Oh, and you're fired." Uhm, thanks.
What's worse is it's not very hard to lose a 94 in this class. Make a small mistake on an exam (like forgetting to explicitly return from a subroutine when it seems obvious to you at the time you wrote the pseudo-code that everyone would think it obvious you would return at that point, so you just put a paragraph break and continue on) and you automatically lose a couple of points on your final grade. Make similar small mistakes on two (of five) homework assignments and you are already looking at a 92 or 93, tops. So at this point in the semester (with 4 weeks left to go) I'm already failing the class, but I'm sure I'll get an 'A' on my transcript. Go RPI!
Sunday, October 2, 2005
So I went with a friend Friday night to see Serenity (aka the Big Damn Movie), even though I came to Firefly late, as I only saw the DVDs this summer. I have had a very mixed reaction to Serenity. On one hand I was very happy I went to go see it, as it was one of the best movies I have seen in a long time. On the other hand some of the plot twists taken by Joss really upset me and some of the charm that the show had just wasn't in the movie... but nothing translates from TV to the big screen without changes.
I'm glad I forked over $9.75 to see the BDM Friday night. It was worth it. And I will definitely pick up the DVD when it comes out. Here's hoping that Joss and crew get a chance to make a sequel; assuming they can make it as good as Serenity itself was.
Friday, September 9, 2005
I have been waiting for a keyboard like this for years. Now we finally have one coming to market, hopefully around 2006. I have long thought that a keyboard whose key caps could adapt to whatever input symbols were most relevant would be incredibly useful for a huge number of applications. So did these folks, and they apparently have created a keyboard whose keycaps aren't printed in the factory - they are tiny displays showing whatever glyphs/icons the computer wants them to show.
But unlike just a touch sensitive LCD panel, each key is still a distinct physical key - so you get tactile feedback.
I love the idea of a "Photoshop" keyboard. That alone could save a good amount of time for graphic artist type folks. Now if only Kinesis would license the technology and put it into their keyboards (I can't work without my programmable Kinesis!).
Amazon.com: Imaginarium.com: Playmobil - Security Check Point
No 4 year old child needs this toy. Mom and Dad just need to take him/her on a flight and they can experience first hand what it is like to be the woman traveler who is trying to gather her keys, spare change and baggage while running for a flight. Unfortunately these poor children of today are going to grow up thinking the types of things which happen at airport security checkpoints (like stealing a guy's car keys) are normal behavior in a free country.
It isn't. It shouldn't be. A large part of it since 9/11 has been a charade to make the populace comfortable with a more authoritarian government. And to convince people we are safer. We aren't. Look at NOLA last week. *sigh*
Sometimes I think it may take another revolutionary war to steer this country onto a brighter path. Either that or the main population needs to get a clue.
Monday, September 5, 2005
If I fail this test the professor is going to recommend I drop his course and take a different class first. I have to take this class (and pass it with a 94%) this year, or I am out of the PhD program. It is only offered in the fall. So I either take it now and pass, or I am thrown out of the program for good.
There is a fallback - if I get a 90-93% I can likely stay in, study for an oral exam and try to pass the oral exam (given by a committee of faculty). I only get one shot at the oral exam. But if I do poorly in the class I will likely not be asked back in January.
Happy Labor Day indeed. :-(
CSCI-2300 and MATH-2800 are the requirements for CSCI-2400. These courses are entitled "Data Structures and Algorithms" and "Introduction to Discrete Structures". I took CSCI-2400, but apparently was able to graduate with an undergraduate degree in computer science without taking MATH-2800. Now I find that I don't have the math background necessary to complete this graduate level course without really struggling, as much of the math is beyond my current knowledge.
How did my advisor, nay better, how did the institute let me graduate with a degree in Computer Science without at least attempting what many computer science folks would consider a very essential course? Easy - this course never used to be a requirement! Now it suddenly is for the graduate school. I am so screwed. Clearly the degree I earned previously from this institute was not very worthwhile, as now I am unprepared for the institute's own graduate school!
Thursday, September 1, 2005
Prices are going to go up everywhere - in every product. Our entire national economy is entirely dependent on diesel and our current pricing structure depends on it being cheap; around $1/gallon. But it is over $3 today. I suspect cost of goods is going to skyrocket next week or the week after.
If these prices stay into the winter, many folks in the northeast won't be able to afford to heat their homes. Or if they can heat their homes they won't be able to afford to eat (as the price of food is also going to increase dramatically). Towns aren't going to be able to keep the roads clear - the price of diesel for the plows is now 2x what it was last year. Taxes are going to have to go up this winter or next spring just to pay for road maintenance.
The crux of the problem is our lack of good mass transit - as many people have stated before. Boston, NYC - both have a decent mass transit system used by millions of people every day. Albany NY - you are lucky CDTA picks you up, let alone that it got you somewhere anytime today. *sigh*
Good thing my car is getting around 27 mpg. Even that is going to cost me more than $33/week in gas just between home and school. I never thought I would be counting the miles each day. Now I am. CDTA's weekly bus pass is $36. Sure I have to pay for the car insurance and the car loan, but my weekly fuel costs are about the same (or slightly less) than the CDTA bus pass - and I don't have to walk 2 miles between a bus stop and my destination.
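The back-of-envelope math behind that comparison (my weekly mileage figure here is a guess that makes the numbers line up; the ~$3/gallon is from the entry above):

```python
def weekly_fuel_cost(miles_per_week, mpg, price_per_gallon):
    """Gallons burned per week times the pump price."""
    return miles_per_week / mpg * price_per_gallon

# Assumed: roughly 300 commute miles/week at 27 mpg and $3.00/gallon.
print(round(weekly_fuel_cost(297, 27, 3.00), 2))  # about $33
```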
Sunday, August 28, 2005
MacOS 10.3 and 10.4 use CUPS - and CUPS doesn't support connecting to a remote LPR daemon that requires Kerberos authentication. MacOS 10.3 and 10.4 both support printing to an SMB print spooler, but for unknown reasons 10.4 can't actually print over SMB.
Due to the architecture of CUPS and MacOS X it is rather difficult for a CUPS backend to gain access to the user's Kerberos tickets. So I developed this solution. :-)
Installation is rather technical - if you aren't comfortable opening a shell and using a command line don't attempt to install this software.
Otherwise, here you go: cups2lprng-relay-1.0.tar.bz2
How it works
At login a Perl script is started and run in the background as a daemon. The daemon creates a UNIX domain socket in /tmp which only the user can access.

The CUPS backend (cups2lprng) is run as root by the CUPS printing daemon. When the print job is received by the CUPS backend, the backend connects to the UNIX domain socket of the user who CUPS claims submitted the print job. The print job is then copied over the domain socket.

When the daemon receives a connection it starts up an LPRng process to print the incoming print job.
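A minimal sketch of the daemon's receiving side, in Python rather than the relay's actual Perl (the user-only socket permissions and the one-job-per-connection flow are the point; the `spool` callable stands in for exec'ing LPRng's lpr with the user's tickets available):

```python
import os
import socket

def serve_one_job(sock_path, spool=None):
    """Listen on a user-only UNIX domain socket, accept a single
    connection, read the entire print job, and hand the bytes to
    `spool` (in the real relay: an LPRng process). Returns the job."""
    srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    if os.path.exists(sock_path):
        os.unlink(sock_path)          # clear a stale socket from a prior run
    srv.bind(sock_path)
    os.chmod(sock_path, 0o600)        # only the owning user may connect
    srv.listen(1)
    conn, _ = srv.accept()
    chunks = []
    while True:                       # read until the backend closes its end
        data = conn.recv(4096)
        if not data:
            break
        chunks.append(data)
    conn.close()
    srv.close()
    os.unlink(sock_path)
    job = b"".join(chunks)
    if spool is not None:
        spool(job)
    return job
```

The real daemon loops accepting jobs, of course; this handles one connection so the shape of the exchange is easy to see.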
Kerberos Tickets/Mach-O Chains
Other solutions have tried to run LPRng directly from within the CUPS backend process. This has proved to be impossible with Kerberos tickets, as the tickets are held in memory by the kernel and are unavailable to the CUPS backend.
We get around the problem by running the daemon as part of the user's login/desktop session. The daemon will have access to the user's Kerberos tickets.
I have seen a solution from NCSU which has a printer plugin that copies the user's Kerberos ticket to a temp file, then passes that temp file to the CUPS backend. This exposes the ticket on disk for a short period of time - a risk. This solution avoids that risk.
Wednesday, August 24, 2005
I'll likely start trying to write here more now that I will have slightly more free time - or slightly less - we'll see how it goes.
The main thing I am looking forward to is being able to start learning new material again... it has been a few years since I last took classes and I haven't really gained very much knowledge since I finished my undergraduate degree. I discovered it is very hard to maintain being on the cutting edge as a computing professional - most businesses won't tolerate employees spending time keeping abreast of their field. *sigh*
Wednesday, July 27, 2005
This concept of "better developers == better software" is something I have known for quite some time, but couldn't really prove. It is good to see that I am not alone in having a hard time proving the concept. It is also good to see that some very bright minds have come to the same conclusion as I, and that they likely got there first. :-)
Joel is right on the money when he is talking about why he formed Fog Creek Software. Good developers only want to be around good developers. They aren't challenged by working with mediocre developers. They can't innovate. They can't grow. Trying to keep a good programmer in a shop full of bad programmers is like trying to keep any Hollywood blockbuster star in horrible "B" level movies on the Sci-Fi network. It just doesn't work.
I found it funny that Joel uses writing IT software for a bank as an example of a place where good programmers won't be found. Funny because I work in an IT software group for a very large bank. Funny because I and anyone who knows me consider myself to be in the "good programmer" category. Funny because there aren't many others there who would fit that category. Fortunately we have had a few recent additions that just might. But I do worry about how long they will last.
We have noticed a huge improvement in code quality since I joined the project. But that's only because I have pruned lava flows, removed stovepipes, decreased vendor lock-in, fixed thousands of bugs nobody else could find or fix, rounded off the square wheels to fit nicely into existing round holes, etc. Had I not done it, it likely never would have been done.
But had the team been composed of 3 really good developers over the past 5 years of this project's life, instead of 9 so-so developers, the large amount of refactoring I have had to do lately never would have been necessary in the first place. We also would have a product that we could be selling in the marketplace to a larger number of the bank's customers. Instead it is a very targeted product serving only a very small number of customers. Our business unit has no capacity for revenue growth as our software can't support it. Neither can our development team.
Joel brought up the King David theory: you only need one good leader and an army of soldiers to carry out orders. The soldiers can be anyone; it is King David that determines success or failure. Like Joel, I disagree with this theory. Any leader needs a good support staff to yield successful results. The better the staff; the better the results will be. But it is even more important in the software industry, as software is still more of an art than it is a science.
But what if you have good people, but the leaders suck? It doesn't matter how good the team is, how well they get along, even if they can read each other's mind and finish each other's thoughts and ideas. If the leadership can't/won't support the team it really won't matter if the team would be able to produce the next killer-app. The team will walk away from the leadership long before the app is finished. But it will be a slow, painful process for everyone involved. Look at WinAmp. Joel danced around the destruction of NullSoft by AOL, but didn't really succeed at pointing the blame entirely on the AOL leadership.
I, and my fellow developers, share a cube farm from hell. I can't get more than 10 minutes of time without interruptions from people coming by. Consequently I only get my work done during "2nd shift" - between 7:00 pm and 11:00 pm. But I go in at 7:00 am. It is a looooooong day.
Our equipment is far too slow to keep up with us. I spend a good part of that 7:00 pm to 11:00 pm time waiting for the cheap IDE disk in my development system to compile our software. An extra $300 for a RAID-0 would have gone a long way. It would at least annoy me less knowing that the leadership actually thought my time was worth more than $300.
I spent the past 2 days hand-merging files that CVS would have done automatically in minutes. I find it very disturbing that the leadership feels my time is worth less than the license costs of GPL'd software.
So I'm pretty much in some of the worst working conditions available for a software developer in the US. About the only way to make it worse would be to have the office in a major subway station in NYC. But on second thought that might actually be an improvement - you wouldn't be able to hold conversations or meetings due to the noise of the crowd and the trains; so you can just wear headphones and focus on the computers instead. Nobody stopping by to interrupt you.
Bad working conditions lead to good programmers going mad. Good programmers going mad leads to bad software, or the departure of said programmers from the team. Either leads to a smaller profit as costs are higher than they should be. But some bean-counter somewhere is happy with the figure we produce so our leadership is keeping the status quo. And this programmer is going mad.
I've just about come to the conclusion that big companies are a very bad idea. Once you reach a certain size the status quo is good enough for such a large percentage of the company that profit margins will fall off just because a bean counter in one group feels another group should spend $3 less this year on post-it notes, thereby resulting in the affected team losing customer information more frequently, and consequently losing customers and the revenue they used to bring in.
It is a good thing our bean counters are saving $300/developer by sticking with the cheaper IDE drives, as I think they lost at least $5-10 million in gross revenue this year because our development team can't support another customer.
It's long since expired.
But I'm still waiting. Why? Because NetSol has locked the domain and is trying to sell it to me at their prices. I can't purchase it again through another registrar. I can't transfer it without purchasing it from NetSol first. If I purchase it from NetSol and then transfer it the other registrar won't give me credit for the registration I paid NetSol. Not to mention the fact that I was trying to avoid paying NetSol's high fees in the first place...
*sigh* So that domain is out in limbo, and may be that way for a long time. Just so NetSol can cybersquat on it. A practice which I thought they weren't very fond of a few years ago when the domain name speculators were registering names left and right.
But then there's the simple fun fact that NetSol is the group of extremely intelligent people who brought the entire .com, .net and .org domain tree to its knees with their "innovative" SiteFinder crap. Good thing that's dead. Or is it?
Why do I still do business with these morons?
I'm so annoyed with the product - and the people who "support" it within our group - that I could jump out the 16th floor window at the office any time now. I need a vacation from it. A permanent vacation. Most of the rest of our development staff spent the better part of this week getting their own build areas to work as well. Our company lost about 7 man weeks this week due to our love of PVCS Version Manager. *sigh*
Thursday, July 7, 2005
Who the hell designed this POS operating system? In this day and age remote administration of anything should be an integrated feature available to all users, not a hack-on-the-side like VNC is for Windows. And a simple reboot request shouldn't take 30 minutes to be entered, then another 30 minutes to be carried out. At this point I could have been 2/3 of the way through the movie, but I have yet to get it playing.
Of course I could just be using a normal consumer DVD player and video scaler instead of this HTPC rig, but what fun would that be? Besides - when I acquired the equipment a video scaler was much more expensive than the HTPC was, and the HTPC is able to generate an image that looks just as good on my projector. I wasn't about to spend more than I needed to.
Monday, July 4, 2005
What about people is it that causes them to ruin things for other people? As a child I often was picked on (ruining things for me) just so some other people could have some fun. I guess it's the same idea here. Morons.
I finally finished watching Firefly over the weekend, and I really must admit, I'm sad the show was cancelled. Much better than anything else that has aired in recent past. I guess the American public is generally stupid enough to prefer "reality" shows like Fear Factor over real creativity, like Firefly. *sigh*
Sunday, June 19, 2005
But the movie is coming out Sept. 30. I'll most likely wind up waiting for it to come out on DVD (and will buy the DVD) as watching movies in the movie theaters just isn't fun anymore.
Tuesday, June 14, 2005
BetaNews | IBM Turns to Open Source Development (via Slashdot)
What was really interesting however wasn't the boxes, but the OS. An entirely new platform which provided a very effective multimedia system. The backbone was a system composed of a large number of threads and clean APIs.
Be died when Apple didn't buy them (they bought NeXT instead) and Gateway and Dell couldn't ship BeOS for x86 on their systems due to their contracts with Microsoft (monopoly anyone?).
It looks like a German company has finally brought back the ideals of the BeOS:
yellowTAB - Makers of ZETA . Whoopie. Gateway and Dell still can't ship it.
Sunday, June 12, 2005
This is much better than my Yahoo! search ranking, which was number 3 for the term "ready to assemble bookcase". I used to get over 60% of my web traffic from that term at Yahoo!. It looks like they finally updated their index and I'm no longer listed on the first page. This is good, as those poor visitors were looking at a failed home theater project rather than shopping for inexpensive bookcases.
The caller launched into an opening statement about how great this year has been and how the party has come together, and how wonderful it was that Senator Tom Daschle was voted out of office. The caller asked me if I agreed.
When I replied I didn't think the past year has been that great, the caller asked "why not?". I've never heard a telemarketer hang up so quickly when I said "I'm not sure I'm a Republican anymore."
Apparently the Republican party doesn't have any interest in keeping its prior members. Instead it is only interested in those who haven't started to question the party's policy decisions. At least they didn't keep me from my movie about Bill Clinton's presidential campaign.
Thursday, June 9, 2005
With the expanded PATRIOT Act powers recently being approved by the Senate one can't help but wonder if Lucas timed Star Wars III's release a little too closely with our own loss of liberties.
Then there are those who lived through McCarthy-ism who say we're still better off today than we were in the 1950s, so it's still A-OK in the U.S. of A. That's fine and good, but we are now worse off than we were in the 1990s. Why are we going backwards, even if it is by a small amount?
Oh that's right - the Saudi oil fields may be drying up, so its time to use America's troops to invade Iraq and secure future oil revenues for our friends in the House of Saud.
Wednesday, June 8, 2005
Here's my new photo catalog for Brown Bear Theater. I'll create the others soon. I may also post some wedding photos!
I think I only previously saw season 1, and I think I missed season 2 entirely. Fortunately they sell the DVDs. I might have to pick them up.
Unfortunately my wife doesn't get their humor and hates it when I watch Red Vs Blue when she is home. Her loss.
Monday, June 6, 2005
Buying right before the switch is a bad investment as your equipment will become obsolete at least 2x faster than at any other point in its history. Buying right after the switch is a bad idea as the equipment and OS tend to not be as stable due to their immaturity. *sigh*
MacNN has coverage of the keynote from Steve Jobs.
Too bad I purchased a Macintosh and helped to prop up Apple as they pulled the rug out from under me for a second time.
Where is that fully open PC platform with great laptop hardware? That plus FreeBSD or Linux looks like the only sure bet towards long-term support.
I just purchased a 15" PowerBook in late December. I was hoping it would last me the next 3-5 years.
Last time I purchased a Macintosh computer was in 1994. Apple had just introduced the PowerPC line, but I didn't have the funds for one of the new PowerMacs - so I purchased a Quadra. The Quadras were based on the 68040 processor, and were the last computers shipped by Apple using that CPU. Apple had just started the transition to a new processor platform and I got left behind from day one.
In 1997 when it was time for me to upgrade I went with an SGI O2, as SGI had been on the MIPS architecture for years and showed no signs of leaving it.
A year or so later SGI announced they were eliminating the MIPS processors from their lineup and moving to Intel processors. This didn't work out very well for SGI, and I was stuck with a MIPS based SGI. SGI is just barely alive. I'm surprised they haven't closed their doors yet thanks to that (and other) disasters.
The SGI/Intel event prompted me to move to x86/Linux for several years.
But now I'm back on the PowerPC based Macintosh, as I just got fed up with the GUI environment on Linux not working well, and most laptop hardware is crap (when compared to Apple's current offerings).
So I guess it's just about time for Apple to switch architectures on me. Again. At least I purchased Tiger.
Saturday, June 4, 2005
Each entry starts as a LaTeX source document residing in my Subversion repository, which is formatted into HTML by latex2html. I have written a small wrapper Perl script which cleans the output from latex2html into slightly more readable HTML, adds CSS classes to facilitate presentation in Movable Type, then posts the entry into Movable Type through the Perl API Net::MovableType.
What's also very cool is that graphics drawn in LaTeX code will be automatically converted by latex2html and uploaded into the archive directory. Therefore I can do equations or graphics in LaTeX and they will appear correctly on the website. For example, here is the fraction "1 over 2" as an equation: $\frac{1}{2}$

Nifty, isn't it?
What's great is that I no longer need to worry about HTML formatting, or really any other presentation issue, while I am writing an article. I can just focus on the content. This is perhaps the greatest strength of LaTeX and why I can't seem to give it up as a means of presenting my thoughts.

I won't really use the posttex script for small entries such as quick links, etc. as it is too much work to write up a LaTeX source file and check it into SVN. But I do plan on writing some longer entries at least once a week from now on, and these will all be written in LaTeX. If space on my webserver doesn't prove to be an issue I may also offer PDF forms of the longer articles.
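The cleanup step of such a wrapper could look roughly like this (a Python sketch of the idea, not my actual Perl script; the navigation-panel comment markers are an assumption about what latex2html emits, which varies by version):

```python
import re

def clean_latex2html(html):
    """Tidy a latex2html fragment for posting into Movable Type:
    drop navigation cruft and tag generated images with a CSS class."""
    # Assumed comment markers around latex2html's navigation panel.
    html = re.sub(r'<!--Navigation Panel-->.*?<!--End of Navigation Panel-->',
                  '', html, flags=re.S)
    # Let the blog's stylesheet size/align equation images.
    html = html.replace('<img ', '<img class="latex" ')
    return html.strip()

sample = ('<!--Navigation Panel-->nav junk<!--End of Navigation Panel-->\n'
          '<p>Body text with <img src="img1.png" alt="frac12"></p>')
print(clean_latex2html(sample))
```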
Today though Sara and I were out shopping for cars. It was 89 degrees out and we were getting thirsty. While doing a U-turn in a shopping center plaza (can't make a left turn across a very busy 4 lane road in that part of town) she suggested we stop at McDonalds so she could get a milkshake. The milkshake idea sounded good, right? It promised to be cold. I was hot. The sign at the drive-thru said "2 smalls for $3.00". How wrong could you go?
Couple of hours later my stomach is killing me and I suddenly remember why I don't get milkshakes there anymore. I can be such a moron sometimes. I should have just gotten a small soft-drink instead.
Friday, June 3, 2005
I plan on hooking the script up to Subversion and having it automatically publish tex article files when they get created or get updated. This should make it very easy for me to write up an article, commit it to SVN and let it move its way up into my webspace without me really having to think about it.
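The hook's filtering step can be sketched simply: in a Subversion post-commit hook you ask `svnlook changed` which paths were touched and keep the .tex ones. A sketch, assuming the usual `svnlook changed` output shape (status letter, whitespace, path):

```python
def changed_tex_files(svnlook_output):
    """Return the added/updated .tex paths from `svnlook changed`
    output, i.e. the articles the hook should hand to posttex."""
    files = []
    for line in svnlook_output.splitlines():
        parts = line.split(None, 1)          # status letter, then path
        if len(parts) == 2 and parts[0] in ('A', 'U') and parts[1].endswith('.tex'):
            files.append(parts[1])
    return files

sample = ("U   articles/grading.tex\n"
          "A   images/diagram.png\n"
          "A   articles/new.tex\n")
print(changed_tex_files(sample))
```

A post-commit hook would then loop over the returned paths and invoke the publishing script on each.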
What's the point in paying the Compaq price premiums and support contracts if the business is still doing the support itself? My best guess is the business doesn't want to hire the quality of staff it needs to maintain its own computers, so it hires slightly lower quality staff who can replace a Compaq power supply via the pretty pictures in the Compaq instruction manual. We're just paying Compaq to give us pretty manuals instead of paying for some staff training.
At every other company that I've worked at in the past we've supported our own computer equipment. We generally had less down time and were able to afford faster disk drives, CPUs, more memory, etc. for the same cash amount. As my earlier note today points out, spending a few more dollars on your disks can really decrease how much time your developers waste on a daily basis. (Yes, I'm waiting for a build to finish right now; it's still going. *sigh*)
I guess losing almost 1/4 of my day (every day) to waiting for a cheap 5400 RPM IDE drive is worth the $500 the company saved by not purchasing me a faster drive for my system.
When will the bean counters at large companies realize that developers have high disk IO demands (when compared to other business-type users, like most Word/Excel users) and actually purchase systems which can keep up with our work?
Wednesday, June 1, 2005
As a developer I have found PVCS Version Manager to be useless and a huge time sink. It is a horrible tool. Its retail price is more expensive than Perforce for a development team of 20 people, yet it lacks the most basic features. It's garbage and should be pulled from the market.
Developers can't be given the security rights necessary to create new files; consequently I have to write new code then send a bug report for each file to our configuration management team, who creates the file for me in the version control system. Ditto for deletes. This creates a headache for me if I send a bug report to have the file created but then find and fix a bug in the file before it has been added to Version Manager. (Given that the average turn-around on new file creation is 48 hours, this is rather common.) I have no idea which files I need to now check out of Version Manager to check in bug fixes and which files are OK as-is.
Did I mention this is even worse than it would appear as Version Manager offers no way to compare your working copy of the source code to a version in the repository? Sure it offers a single file difference between your working file and the repository version of the file, but what it can't do is give me the differences (if any) for all 8000+ files which make up our small product. Most of them don't typically have differences, but how do I know which ones have them if Version Manager can't tell me?
Did I mention that when another developer requests a file to be deleted from the repository and our configuration management team follows through with the bug report that was sent to them, Version Manager won't remove it from my working directory? Yea... what's up with that? It will happily leave all deleted content in my working directory until the cows come home, pigs fly, or I delete all source from my local machine and refetch it all from the version control system. I think we've lost some 100 man hours over the past 8 months to this problem alone.
Writing up the bug reports for file creations and file deletions is also a huge time sink. There's no automated process to do this, so developers jot down on paper the names of the files they create as they create them; then a week or so later they write out bug reports for each one and submit them to the configuration management team. We've estimated most developers are spending about 30 minutes to an hour each week on this activity. Over the span of 8 months this is 240 man hours (0.75 hours * 10 developers * 32 weeks).
We also can't have multiple developers editing the same files at the same time. If I'm adding a new constant at the bottom of a file and Bob is adding a new constant near the top of the file (at least 50+ lines from where I'm adding mine), I have to either wait for Bob to finish or make the change on my local system, then days later discover my file is missing Bob's change and redo Bob's change manually. Between schedule slippage from developers waiting for a file to be released to them and wasted man-hours redoing work which had previously been done (but couldn't be automatically merged, as Version Manager lacks this ability), each developer has lost ~30 minutes each week for 8 months. That's 160 man hours.
I've seen developers lose an entire day because they can't get their local working copy to compile anymore. This typically happens because they have a partial update from Version Manager. A typical example: a developer has modified file X.java (but hasn't checked it in yet); another developer has modified A.java, B.java, C.java, ... E.java, and checked all of those files in. The changes in A-E are dependent on each other, but the dependencies aren't immediately obvious. The developer working on X.java realizes he also needs to modify E.java and checks it out for modification... now he has part of the A-E change, but not all of it, and suddenly he can't compile or test anymore. Because he can't find out how his local directory differs from the version control tool, he isn't willing to just delete the directory and start over (for fear of losing his change to X.java and having to start from scratch). But he also can't find out that the change in E.java depends on A.java through D.java, as Version Manager can't tell him these files were modified at the same time. This has cost the project at least 8 hours of a developer's time once every couple of weeks over 8 months: 128 man hours.
By default Version Manager will set the modification date of source code files you are getting from the repository to be the modification date of the file when it was checked in by the last author. This is an [sarcastic]awesome feature[/sarcastic] when you are using Ant as a build tool, as Ant makes compilation decisions based on the *.java file being newer (has a later modification date) than the *.class file. If I compile A.java this morning, but Bob then checks in a new version of A.java which he wrote yesterday (and hasn't modified since) and I then get the new version from Version Manager, Ant won't recompile it as A.java is still older than A.class (the source has yesterday's date on it, while the class has today's date).
After discussing this with our local configuration management team they told me the Ant build system was broken; build systems which drive off modification dates on files can't be trusted for incremental builds and another solution should be used. In classic form they couldn't offer up any alternatives. I agree that file modification dates are problematic, but I've rarely had trouble when working only on a local filesystem (NFS without network time servers is a whole other story). Every other version control system that I've used in the past (except ClearCase) always sets the modification date of the source files to the date/time the file was gotten from the version control system, ensuring that a date/time based build system will recompile it properly. (ClearCase gets away with not doing so by providing ClearMake, a build process which drives off the version information stored by ClearCase.)
So I was forced to add a task to Ant which compares file contents by MD5 checksum, then touches the files to move their modification dates forward to the current date/time prior to running any other tasks (such as javac). That was 4 hours of development time wasted, and now our build process takes 5 minutes instead of 2. Given that I try to do about 10-15 builds per day, I'm losing half an hour each day waiting for the build process to update file modification dates. Other developers are in the same boat.
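The actual Ant task isn't reproduced here, but the idea behind it can be sketched in Python (a hypothetical sketch: the manifest file and its location are my own invention, not part of the real build):

```python
import hashlib
import json
import os
import time

def md5_of(path):
    """Return the MD5 hex digest of a file's contents."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def touch_changed_sources(src_dir, manifest_path):
    """Compare each source file's checksum against the one recorded on
    the previous run; when the contents changed, bump the file's mtime
    to now so a date-driven build tool (like Ant's javac task) will
    recompile it even though the checked-in mtime is in the past."""
    try:
        with open(manifest_path) as f:
            manifest = json.load(f)
    except (OSError, ValueError):
        manifest = {}
    now = time.time()
    for root, _dirs, files in os.walk(src_dir):
        for name in files:
            if not name.endswith(".java"):
                continue
            path = os.path.join(root, name)
            digest = md5_of(path)
            if manifest.get(path) != digest:
                os.utime(path, (now, now))  # move mtime forward
                manifest[path] = digest
    with open(manifest_path, "w") as f:
        json.dump(manifest, f)
```

Run before javac, unchanged files keep their old mtimes (and so aren't recompiled), while any file whose content differs from the last build gets a fresh timestamp.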
Because we can't do any reasonable differencing to determine which files we need to write bug reports for configuration management to modify for us, I've had to add special Ant tasks which scan for all files which aren't read-only in a directory and copy them off to a holding area. The holding area serves as a place for the developer to run 'diff' from on the command line or from within the Eclipse IDE to determine what they need to send to our configuration management team. We've spent at least 1.5 person-days on this little set of scripts; 10 developers actively working with them (and producing on average 8 new holding areas per day - 1 per hour) has cost us nearly 20 GB of disk space on our file server. It has also entirely bypassed the security system of Version Manager, as now anyone with access to our file server can read the source code. (A wider user base has access to the file server than to PVCS Version Manager.)
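The Ant tasks themselves aren't shown here, but the holding-area idea boils down to something like this Python sketch (function and path names are illustrative; it keys off the owner-write bit, since PVCS leaves unmodified files read-only):

```python
import os
import shutil
import stat
import time

def snapshot_writable_files(src_dir, holding_root):
    """Copy every file under src_dir whose owner-write bit is set
    (i.e. not left read-only by the version control tool) into a new
    timestamped holding area, preserving the directory layout, so the
    developer can later run 'diff' against a fresh working copy."""
    stamp = time.strftime("%Y%m%d-%H%M%S")
    holding = os.path.join(holding_root, stamp)
    copied = []
    for root, _dirs, files in os.walk(src_dir):
        for name in sorted(files):
            path = os.path.join(root, name)
            if not (os.stat(path).st_mode & stat.S_IWUSR):
                continue  # read-only: not checked out for edit
            rel = os.path.relpath(path, src_dir)
            dest = os.path.join(holding, rel)
            os.makedirs(os.path.dirname(dest), exist_ok=True)
            shutil.copy2(path, dest)
            copied.append(rel)
    return holding, copied
```

One holding area per invocation is exactly why the disk usage piles up: every snapshot is a full copy of whatever is writable at that moment.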
With a (burdened) developer cost of ~$70/hour we're talking about $92,960 (1,328 man hours) of wasted developer resources. Unfortunately management doesn't see these hours as a real cost, but the project is 2 months behind right now. With 10 people working on the project we're behind by at least 1 month thanks to our choice of version control tools. That's just the last 8 months. We've had this tool for years.
Early on in this project we tried to branch our source code base. Version Manager can't handle branches. The vendor claims it can, but it really can't. Our local configuration management team was initially unable to set up a single branch of development properly. Developers were forced to spend a couple of weeks getting our configuration management team to correct the system; for the next couple of months we kept finding files which had been branched improperly. This likely cost us another 200 man hours between developers and configuration management: $14,000.
We could have purchased Perforce, ClearCase, BitKeeper, etc. for a fraction of the cost of the time we have thus far wasted, and we would have only been 1 month behind, not 2 (the other month lost is more likely due to a few ill-defined requirements which were incorrectly implemented early in the project). We can't ship this project late; we're contractually obligated to deliver it on time. If we don't, our customer will face a major revenue loss at a time when it can least afford one.
I miss the days of CVS. CVS, for all its warts, just freaking works. It could have saved us nearly $100,000 in man-hours over the past 8 months, kept the project one month less behind schedule, and saved a whole lotta frustration.
To the PVCS VM developers: Have you even freaking looked at CVS? Perforce? ClearCase? Aegis? I realize some of these products are commercial and may be difficult for you to do competitive analysis on, but CVS is the de facto standard in open source development and it's under the GPL. So long as you aren't stealing source code from it, it is perfectly acceptable to study. Now that Subversion, darcs, and GNU arch are all stable you might want to also look at those. Oh, and don't forget about git, which the Linux kernel team wrote in 1 month because of the whole BitKeeper fiasco. And let's not forget about BitKeeper, which is currently (in my opinion) the best version control tool available on the market.
Joel's right on target - you can learn a lot about someone just by looking at what they have read. The key here is what they have read of course, not what is on their bookshelf. Heck, I have a stack at least 3 feet tall of books I have received from publishers that I haven't even had time to read yet. I suspect only half of them will be any good anyway.
I stumbled across Joel's book review list and he's got some good things to say about some of these books, many of which I haven't read yet. In particular Peopleware: Productive Projects and Teams. I just might have to obtain a copy, read it, and weep, because I'm sure my workplace ignores every good part of this book.
Fortunately I'm a better programmer than I am graphic designer.
But we just had to go see Star Wars Episode III when it came out. At midnight on opening day no less. $20 later I have two tickets in hand; another $10 and we have a gallon of soda and a feed bag of stale popcorn to share between the two of us. We could have fed a family of four off just the soda and popcorn (just not a very healthy meal).
The sound was too loud; the seats weren't comfortable (nothing like our recliners at home!); they keep the lights up too bright (when the screen is black the room should be black, dangit!). *sigh* And let's not even talk about the Christie digital projectors which are being installed around here to show "The Twenty", a 20 minute TV infomercial we were subjected to watching because we arrived a little bit earlier than the posted start time. The Christie DLPs just don't have enough resolution to avoid a visible screen-door effect from any seat in the theater except the last two rows, which are some of the worst seats in the room. There's nothing more entertaining than counting the number of pixels which make up the "T" in "The Twenty", or the "N" in "NBC", or better, a popular actress' nose. My wife hates going with me now as I wouldn't shut up about it. At least they showed the movie on film and not the DLP. I would have left and demanded a refund if it had been on the DLP.
With DVDs at $15/each during the first week or two after release and a Netflix membership at $15/month there's no reason for me to go to the theaters anymore. It's just not worth it. Even if I really am interested in seeing a movie, it will look better on DVD at home and I will certainly be able to enjoy it more.
I've wasted many hours sitting in here watching movies. Unfortunately my DVD collection isn't large enough to really make good use of it; no matter what you thought about Star Wars II: Attack of the Clones the movie does get boring after a while! That's why Netflix rocks. We just recently rejoined after over a year without a membership.
On an 8 foot wide screen DVDs look great, but cable TV looks like garbage. So I'm just going to cancel my cable subscription and just use Netflix. There's very little we watch on TV anyway that won't be on DVD within 12 months so I might as well just wait and see it on DVD on my 8' wide screen.
I was planning on quickly hacking together a blog engine based on latex2html and some XSLT to construct article entries from LaTeX source. I'm very much against writing HTML by hand and I'm such a vi nut that I really hate using any text editor other than vi... but MovableType nicely offers up a free version, so I thought I should at least give it a try. At $69 or whatever for the personal edition it may be a good buy if I really start to use it.
Of course now I realize that with my content stored in a database on the web server I can't automatically include it in my nightly backup routine of my fileserver at home. *sigh* I guess this means I now have to rig up a crontab on my home fileserver to SSH into the web server, dump the database, then drag the file back prior to my normal filesystem backups starting up. Being your own sysadmin is such a time-consuming task... at least I don't also maintain the webserver; my long-time friend Andrew at Plexpod does that for me! :-)
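The crontab job I have in mind boils down to two commands. A rough Python sketch of building them (host, user, and database names are placeholders, and I'm assuming MovableType is sitting on MySQL so mysqldump applies):

```python
def backup_commands(host, db_user, db_name, local_dump):
    """Build the two commands the nightly cron job would run: dump the
    blog database on the web server over SSH, then copy the dump back
    to the fileserver so the normal filesystem backup picks it up."""
    remote_dump = "/tmp/%s-dump.sql" % db_name
    return [
        # run mysqldump on the web server, leaving the dump file there
        ["ssh", host, "mysqldump", "-u", db_user,
         "--result-file=%s" % remote_dump, db_name],
        # pull the dump down to the local fileserver
        ["scp", "%s:%s" % (host, remote_dump), local_dump],
    ]
```

A driver script run from cron would then execute each command in order (e.g. with subprocess.run(cmd, check=True)) before the filesystem backup kicks off.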
Tuesday, January 11, 2005
Originally I wrote irix-dtrrts for Irix 6.2 so my O2 could communicate with an X10 Firecracker which I received as part of a bundle through a promotional offer. At the time Irix couldn't control the DTR and RTS lines of an RS-232 port as was needed to signal the Firecracker, so I wrote a small in-kernel device driver for Irix 6.2 to perform the DTR and RTS signalling required by bottlerocket, the command line program used on UNIX systems to control Firecrackers.
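The driver itself is Irix-specific, but the same DTR/RTS signalling can be done from user space on a modern Linux box. A rough sketch of the idea in Python (Linux TIOCMBIS/TIOCMBIC ioctls, not the Irix driver interface):

```python
import fcntl
import struct
import termios

def line_mask(dtr, rts):
    """Return the modem-control bitmask covering the requested lines."""
    mask = 0
    if dtr:
        mask |= termios.TIOCM_DTR
    if rts:
        mask |= termios.TIOCM_RTS
    return mask

def set_lines(fd, dtr, rts):
    """Raise the requested lines and lower the others on an open
    serial-port file descriptor, using the TIOCMBIS (set bits) and
    TIOCMBIC (clear bits) ioctls."""
    raise_bits = line_mask(dtr, rts)
    clear_bits = line_mask(not dtr, not rts)
    if raise_bits:
        fcntl.ioctl(fd, termios.TIOCMBIS, struct.pack("I", raise_bits))
    if clear_bits:
        fcntl.ioctl(fd, termios.TIOCMBIC, struct.pack("I", clear_bits))
```

bottlerocket drives the Firecracker by toggling these two lines in a timed pattern to clock out the X10 command bits, which is exactly why the OS has to expose per-line control in the first place.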
I seriously doubt that irix-dtrrts would work on Irix 6.5 or later. I certainly haven't tried. :-)