Funny as hell...
http://www.bloggingwv.com/corn-fed-venison-it-looked-good-on-paper/
Saturday, May 31, 2008
Tuesday, May 27, 2008
Religious discrimination...
PART IV. CRIMES, PUNISHMENTS AND PROCEEDINGS IN CRIMINAL CASES
TITLE I. CRIMES AND PUNISHMENTS CHAPTER 272. CRIMES AGAINST CHASTITY, MORALITY, DECENCY AND GOOD ORDER Chapter 272: Section 36. Blasphemy Section 36. Whoever wilfully blasphemes the holy name of God by denying, cursing or contumeliously reproaching God, his creation, government or final judging of the world, or by cursing or contumeliously reproaching Jesus Christ or the Holy Ghost, or by cursing or contumeliously reproaching or exposing to contempt and ridicule, the holy word of God contained in the holy scriptures shall be punished by imprisonment in jail for not more than one year or by a fine of not more than three hundred dollars, and may also be bound to good behavior.
http://www.mass.gov/legis/laws/mgl/272-36.htm
Apparently, atheists are prohibited from holding office in...
- Arkansas (Constitution Of The State Of Arkansas Of 1874. Article 19. Miscellaneous Provisions. § 1. Atheists disqualified from holding office or testifying as witness. No person who denies the being of a God shall hold any office in the civil departments of this State, nor be competent to testify as a witness in any Court.)
- Maryland (Article 37 of the Declaration of Rights of the Maryland Constitution That no religious test ought ever to be required as a qualification for any office of profit or trust in this State, other than a declaration of belief in the existence of God; nor shall the Legislature prescribe any other oath of office than the oath prescribed by this Constitution.)
- North Carolina (North Carolina State Constitution, Article VI, Section 8: Sec. 8. Disqualifications for office. The following persons shall be disqualified for office: First, any person who shall deny the being of Almighty God.)
- South Carolina (South Carolina State Constitution, Article VI, Section 2: No person who denies the existence of the Supreme Being shall hold any office under this Constitution.)
- Tennessee (The Tennessee Constitution, Article IX, Section 2 No person who denies the being of God, or a future state of rewards and punishments, shall hold any office in the civil department of this state.)
- ...and Texas (The Texas Constitution, Article I, Section 4: No religious test shall ever be required as a qualification to any office, or public trust, in this State; nor shall any one be excluded from holding office on account of his religious sentiments, provided he acknowledge the existence of a Supreme Being.)
http://www.freethoughtpedia.com/wiki/Anti-atheist_laws
The most bizarre part is not that these laws exist - they mostly date to the 1800s. It's that they have not been challenged yet!..
Update: this article from Rolling Stone should be good for a decade in jail in Massachusetts... http://www.rollingstone.com/politics/story/20278737/jesus_made_me_puke/print...
Tuesday, May 20, 2008
Panasonic HDC-HS9 product review
We got a new 1080p camcorder yesterday - a Panasonic HDC-HS9, just in time for the summer vacation season.
This thing has got to be a technological marvel - it captures and encodes video in 1080p. Barely two years ago a typical computer could not even PLAY video at this resolution, let alone record it. This camcorder encodes at full resolution on the fly and stores the results on a 60GB hard drive.
Good things first.
The specs are great for the price - 1080p recording, 3 CCDs, 60GB hard drive. It is also small and light - the size of a palm. It produces very decent videos outdoors, in full daylight (but see below).
The sound it records is excellent - really. 5.1 surround sound, and it really truly sounds like a good 5.1 surround recording.
And it is relatively inexpensive at ~$750.
The best thing about this camera is the convenient output - it basically produces a Blu-ray disc file structure. You can copy the directory off the camcorder, drop it on the PowerDVD HD window, and it just plays. The interface is USB, and the camcorder appears as an external hard drive, so it's drag and drop to save the files, and then drag and drop to play them. Very nice.
Now, the bad things. In general, the design of this thing is terrible - across the board. I am sorry to say this, but the UI designers for it were idiots - there's no other word to describe this.
For starters, the USB connector is behind the LCD screen. So to connect it to a PC one has to open the screen, peel away a little plastic cover, and connect the USB cable. And while it is connected, the LCD screen stays open. If you drag the cable and it falls on the floor, the screen is almost guaranteed to break off.
This is not all. While it is connected to a PC, it MUST be on external power - it cannot run off USB power, OR its own battery. And the power connector is behind the battery - to connect it, you have to take the battery out.
So when you're copying the files, you have two cables plugged into the thing, the little plastic USB slot cover hanging on its plastic strip, the LCD screen is open, and the battery is lying to the side. What a mess.
This of course also means that you can't charge the battery while it's inside the camcorder. You have to take it out and insert it into the charger. You might hope that you can at least charge it while copying the images from the camera, but no. The charger does not charge the battery while the camera is plugged in. How silly is this?
It actually does matter. The thing produces ~1.3GB every 10 minutes, and the battery lasts ~70 minutes, after which it takes 1.5 hours to charge. A 1.3GB video takes roughly 2.5 minutes to copy off to the PC, so 70 minutes' worth takes another ~20 minutes. If they allowed charging while the camcorder is connected, it would mean a ~20% savings in the time it takes to get the camcorder ready to shoot again.
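A quick back-of-the-envelope sketch of that math, using only the numbers above (the 90-minute charge is the 1.5 hours quoted; nothing else measured):

```java
public class TurnaroundMath {
    public static void main(String[] args) {
        double footageGb     = 1.3 * 70 / 10;          // ~1.3GB per 10 min over a ~70-minute battery
        double copyMinutes   = footageGb * (2.5 / 1.3); // ~2.5 min to copy each 1.3GB chunk
        double chargeMinutes = 90;                      // 1.5 hours to recharge

        // Today: copy first, then charge (charging won't run while connected).
        double serial = copyMinutes + chargeMinutes;
        // If charging worked while connected, the copy would hide inside the charge.
        double overlapped = Math.max(copyMinutes, chargeMinutes);

        System.out.printf("copy ~%.1f min; turnaround %.1f min vs %.1f min (%.0f%% saved)%n",
                copyMinutes, serial, overlapped, 100 * (serial - overlapped) / serial);
    }
}
```

That works out to roughly 17-18 minutes of copying and 15-20% of turnaround time saved - in the same ballpark as the ~20 minutes / 20% above.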
Now, there IS space on the case where the connectors - power and USB - might have belonged more logically. This space is taken by an enormous SD card slot. According to the manual, the purpose of the SD card slot is to shoot where the hard disk cannot be used - at elevations above 3000 meters (for Seattleites, that's just below Camp Muir on Mount Rainier), or in high-vibration environments such as a dance club.
I would much rather have the SD slot hidden and the USB connector exposed, on the expectation that most people will use the hard disk most of the time, and even when they do use the SD card, they will still use USB to transfer the data.
Speaking of the manuals... real men don't use them, do they? Well, I'd love to see a real man using THIS camcorder :-). Without reading the manual, this thing is utterly useless. Looking at the menu system, you cannot ever guess what is where. The time settings, for example, are spread across two menus - setting the time and the time zone (which is not called a time zone, by the way) is in the "Basic" menu; setting the time format is in "Setup".
I still have not found where one changes video options such as the low-light setting - the camcorder offers it as a prompt when it detects a low-light situation, and this is how I select it, but I have no idea how to find it in the menus. Nor do I know how to turn on the light for the low-light setting. I haven't gotten to it in the manual yet.
These are the glaring UI problems - there are plenty of minor nits as well. For example, the charger shows a green LED while it charges, and the LED goes out when charging is done. Most consumer electronics I have owned either blink the LED while charging, or show a yellow LED that turns green when charging is done.
The UI problems transcend the device itself.
The software tries to use skins, except the message boxes it shows are not skinned. They are plain Windows message boxes and look out of place in a UI that otherwise tries to look like a media player. And they are everywhere - any action brings up a message box (you click the "transfer video" button - a message box opens asking "Transfer video?" (Yes/No)). Kinda like Vista, only worse.
The default location where the software copies the data - on XP at least - is "All Users\My Documents\My Pictures". Good luck finding that in YOUR "My Documents" folder. Both videos and pictures go there, in a strange structure - the videos, for example, end up in a folder called PRIVATE. How DID they know what it was I was shooting?!
There are a few seemingly arbitrary limitations in the software - for example, it plays media from the SD card (while it is in the unit), but not from the unit's hard disk. Yet for some reason it does support playing from a DVD. Why bother with that piece of functionality?
The error paths in the software are not tested. This is what it produces when you try to install it on Media Center.
A worse problem than the terrible UI is the camcorder's performance in low light. As I wrote above, it records excellent video in daylight. But in the evening the videos become grainy and blurry at the same time, as if it had trouble with auto-focus. Here is a frame from a video shot in daylight:
And here's one taken during the evening (yes, I was using the low light mode):
I must admit I do not have a frame of reference on this - I don't know how other camcorders perform in similar situations. A review on Amazon says Sony produces equally terrible images in low-light conditions.
So net/net I would give it 3.5 stars out of 5 - just barely enough not to contemplate returning it to where I got it.
Are you on the list?
"Senior government officials have leaked detailed information about a database of 8 million Americans targeted for detention in case of a declared national emergency."
http://www.dailykos.com/storyonly/2008/5/20/21950/7576/933/518756
Monday, May 19, 2008
Premature optimization is the root of all evil
Every once in a while I come across a code review where there is a small inefficiency in the code which can be easily corrected, but where the author invokes the ghost of "premature optimization" to justify keeping it that way.
Today's example (Java)...
Map map;
...
for ( ; ; ) {
    ...
    if (!map.containsKey(key))
        continue;
    Object x = map.get(key);
    ...
}
This does the hash lookup twice - first when checking whether the map contains the key, and again when retrieving the value. In the case where the objects stored in the map are never null, the time spent in this code can be cut in half by just doing this:
Map map;
...
for ( ; ; ) {
    ...
    Object x = map.get(key);
    if (x == null)
        continue;
    ...
}
Is this a premature optimization?
What about this (C++):
bool process(const string &in, string *out) {
    *out = in;
    string x, y;
    if (!subprocess(in, &x, &y))
        return false;
    *out = x + '/' + y;
    return true;
}
"*out = in;" being something on the order of several hundred of instructions (http://1-800-magic.blogspot.com/2008/04/stl-strings.html), the same code can be rewritten, without the loss of readability, as follows:
bool process(const string &in, string *out) {
    string x, y;
    if (!subprocess(in, &x, &y)) {
        *out = in;
        return false;
    }
    *out = x + '/' + y;
    return true;
}
Would this be a premature optimization as well?
The term "premature optimization," originally applied by Knuth and Hoare to making design trade-offs for clock-level efficiency, is most often misapplied to mean that how you write the code does not matter - we'll figure out what's slow and optimize it later.
There are a few problems with this approach. First, as Hoare writes himself...
"I've always thought this quote has all too often led software designers into serious mistakes because it has been applied to a different problem domain to what was intended. The full version of the quote is "We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil." and I agree with this. Its usually not worth spending a lot of time micro-optimizing code before its obvious where the performance bottlenecks are. But, conversely, when designing software at a system level, performance issues should always be considered from the beginning. A good software developer will do this automatically, having developed a feel for where performance issues will cause problems. An inexperienced developer will not bother, misguidedly believing that a bit of fine tuning at a later stage will fix any problems."
(Emphasis mine.)
There are certain systemic decisions that a designer makes before writing the code (do I use STL? Do I use Java?) that are extremely difficult to undo once the code is complete - the performance/memory/code size implications may end up spread across the entire system - perhaps across multiple layers of the software stack - and are extremely hard to localize and "fix".
What's worse, the "fix" usually is an even worse hack.
Consider a developer who applies Java to the wrong application domain (http://1-800-magic.blogspot.com/2007/11/domain-languages.html) - e.g. media processing - and ends up with a very slow system.
This developer might be compelled to rewrite parts of it in C++, sprinkling native code into the various parts of the otherwise managed system where profiling shows the biggest performance gains.
Of course this makes the design more complicated and the code less readable - now some parts of the system are implemented in one language, and other seemingly random parts in another. This flies in the face of the originally stated goal of a "clean" design. It also makes debugging and correctness verification much harder.
The second point I want to make is about implementation cleanliness. While one might argue that explicitly checking whether the hash contains an element more clearly expresses the intent of the developer, that is actually in the eye of the beholder.
When I look at the original Java code snippet, the first thing that comes to my mind is: "oh, (s)he must be storing nulls as possible values, so the check needs to be done upfront. But wait, (s)he then dereferences the value without checking for null? A bug, perhaps?.."
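That ambiguity is real: when a map may legitimately hold null values, get() alone cannot tell "absent" from "present but null" - which is the one case where the containsKey() pattern is actually required. A minimal sketch (the keys here are hypothetical, just for illustration):

```java
import java.util.HashMap;
import java.util.Map;

public class NullValues {
    public static void main(String[] args) {
        Map<String, Object> map = new HashMap<>();
        map.put("present", null); // a key deliberately mapped to null

        // get() returns null in both cases - indistinguishable:
        System.out.println(map.get("present")); // null
        System.out.println(map.get("absent"));  // null

        // Only containsKey() tells the two apart:
        System.out.println(map.containsKey("present")); // true
        System.out.println(map.containsKey("absent"));  // false
    }
}
```

So the upfront check is not mere style - it signals "nulls may be stored here," which is exactly why seeing it where nulls cannot occur is misleading.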
And when I look at the C++ snippet, I think that returning "false" rather than "true" is perhaps the common case, since the code is clearly optimized for it (it does more work than necessary in the "true" case, but that probably does not matter because we never follow that code path in reality).
So as you can see, opinions on what's readable - explicit check with upfront initialization vs. checking for null with error-case initialization - may diverge. For me, doing the extra work makes the code less readable - I assume it was done for a reason and would start searching for that reason. Another developer might prefer the more explicit but redundant code.
Since we don't know who will be reading our code, and what his or her preferences might be, we should not trade ephemeral aesthetics against the very real watt-hours burned by the CPU executing suboptimal software.
The third reason for choosing the more efficient programming style is that it makes one a better developer. To reiterate the Hoare quote, albeit a bit out of context, "A good software developer will do this automatically, having developed a feel for where performance issues will cause problems."
The code above is worth noticing and correcting simply to make sure that the developer trains himself or herself to avoid writing the same code in the future (and perhaps, this time, in the critical path). After a correction or two, the engineer will start producing efficient implementations automatically.
I consider this the point of style that separates the men from the boys (and the women from the girls, to be politically correct). If you look at code written by great developers, it is efficient without sacrificing readability, non-redundant without being obscure. It implements the algorithm - even in pseudocode - in a way that makes it completely clear what the author wants to do, without sacrificing a single clock of CPU time.
Take a look at the code in CLR (http://www.amazon.com/Introduction-Algorithms-Thomas-H-Cormen/dp/0262032937), Sedgewick (http://www.amazon.com/Bundle-Algorithms-C%2B%2B-Parts-1-5/dp/020172684X), or David Hanson's "C Interfaces and Implementations" (http://www.amazon.com/Interfaces-Implementations-Techniques-Addison-Wesley-Professional/dp/0201498413). Even when written in pseudocode, it is amazingly effective pseudocode, and it achieves this effectiveness without sacrificing readability.
Learn to do this, and you're going to be a great developer. Take up the religion where writing effective code by default is a "premature optimization" - and consign yourself to a career, as Joel puts it, of copying and pasting a whole bunch of Java code :-).
More on premature optimization here: http://www.acm.org/ubiquity/views/v7i24_fallacy.html.
Sunday, May 18, 2008
Democracy and deference
Mark Slouka draws a very interesting parallel between the strongly hierarchical model of most business organizations and its consequences for domestic politics.
The central thesis is that we're so used to unquestioningly following orders at work that we carry the same paradigm into politics, and it becomes too easy to forget that a president is not a sovereign ruler but a servant of the people. And the president is only too happy to behave like a king, since the population allows it.
"Turn on the TV to almost any program with an office in it, and you'll find a depressingly accurate representation of the "boss culture," a culture based on an a priori notion of-a devout belief in-inequality. The boss will scowl or humiliate you... because he can, because he's the boss. And you'll keep your mouth shut and look contrite, even if you've done nothing wrong... because, well, because he's the boss. Because he's above you. Because he makes more money than you. Because - admit it - he's more than you.
This is the paradigm - the relational model that shapes so much of our public life."
http://www.harpers.org/archive/2008/06/0082039
I think there's a lot of truth to this. I was once in the path of a senator at the Consumer Electronics Show in Las Vegas. His bodyguards were literally shoving people out of his way as his excellency was moving across the exhibition floor.
I could never have imagined this was possible - even in Soviet Russia I never saw party officials (or rather, their guards) behave like that.
There's no way of telling for sure whether this phenomenon is caused by the corporate culture, but I'd say it is probably not a bad guess. I've seen people change almost completely when they changed managers, to the extent that their behavior became difficult to recognize.
Friday, May 16, 2008
Einstein's letter on religion goes for 170,000 pounds
The guide price was 6000-8000 pounds. I was bidding on it, too, via an absentee bid. My puny offer of 9000 pounds was not even close :-(.
This was the only known written communication in which Einstein expressed his views on religion directly.
My wife's reaction, which I do share, was: "hope they didn't buy it to destroy it".
"... The word God is for me nothing more than the expression and product of human weaknesses, the Bible a collection of honourable, but still primitive legends which are nevertheless pretty childish. No interpretation no matter how subtle can (for me) change this. These subtilised interpretations are highly manifold according to their nature and have almost nothing to do with the original text. For me the Jewish religion like all other religions is an incarnation of the most childish superstitions. And the Jewish people to whom I gladly belong and with whose mentality I have a deep affinity have no different quality for me than all other people. As far as my experience goes, they are also no better than other human groups, although they are protected from the worst cancers by a lack of power. Otherwise I cannot see anything 'chosen' about them.
In general I find it painful that you claim a privileged position and try to defend it by two walls of pride, an external one as a man and an internal one as a Jew. As a man you claim, so to speak, a dispensation from causality otherwise accepted, as a Jew the privilege of monotheism. But a limited causality is no longer a causality at all, as our wonderful Spinoza recognized with all incision, probably as the first one. And the animistic interpretations of the religions of nature are in principle not annulled by monopolisation. With such walls we can only attain a certain self-deception, but our moral efforts are not furthered by them. On the contrary.
Now that I have quite openly stated our differences in intellectual convictions it is still clear to me that we are quite close to each other in essential things, ie in our evaluations of human behaviour. What separates us are only intellectual 'props' and 'rationalisation' in Freud's language. Therefore I think that we would understand each other quite well if we talked about concrete things.
With friendly thanks and best wishes
Yours, A. Einstein."
http://www.relativitybook.com/resources/Einstein_religion.html
Thursday, May 15, 2008
Price/performance
A picture takes the space of ten thousand words, but is worth only a thousand.
(Based on a typical low-res jpeg size of ~60k, 5-6 letters per word, and a famous saying :-).)
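For the curious, the arithmetic behind the quip, with the assumed figures made explicit:

```python
# All numbers are back-of-the-envelope assumptions, not measurements.
JPEG_BYTES = 60_000       # a typical low-res jpeg, ~60k
BYTES_PER_WORD = 6        # 5-6 letters per word, plus a space

words_of_storage = JPEG_BYTES // BYTES_PER_WORD
print(words_of_storage)   # 10000 - "takes the space of ten thousand words"
```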
Tuesday, May 13, 2008
Evil web advertising...
My computer was quite sluggish today. At first, I blamed it on Gmail - I am on the dev cluster, which gets a new version daily, without any testing, so I hit various bugs in it - including performance bugs - very often. But after I closed Gmail the sluggishness persisted, so I decided to investigate.
Take a look at the following two pictures. This one has CPU utilization at 10%.
And in this one, the CPU utilization has dropped to zero.
The only difference - the little "free smileys" advertising craplet that had crept into the background. At one point I had a few of them, and the CPU was at a constant 50%...
Does your XP box reboot incessantly after SP3 was installed?
The problem and the solution are here:
http://msinfluentials.com/blogs/jesper/archive/2008/05/08/does-your-amd-based-computer-boot-after-installing-xp-sp3.aspx
A bunch of my Media Center computers are AMD-based cheap motherboard+CPU combos from Fry's. I look forward to having lots of fun fighting this soon... not!
Wednesday, May 7, 2008
Code Reviews with a Smile
Google Seattle has a weekly event called "Gathering" - a team meeting for all engineers in the office where anyone can talk about anything of interest to Googlers. Topics range from overviews of various projects people are working on, to discussions of the new technologies, to presentations by various researchers from academia.
I took the last couple of slots to talk about the human aspects of the code reviews.
I have been doing a lot of these lately. Being a readability reviewer for JavaScript guarantees you at least one code review per week, typically consisting of 3 or 4 iterations. Plus, I have a bunch of readabilities in other languages, and I often volunteer to review code for teams that have nobody with readability (more on readability at Google here: http://1-800-magic.blogspot.com/2008/01/after-8-months-no-longer-noogler.html).
The talk turned out to be a success, so I decided to blogify it :-).
When I was at Microsoft, I was no fan of mandatory code reviews. I thought it was impractical to have every line of code read by someone else, and I was also afraid that people would use code reviews as a substitute for testing (to an extent, this does happen at Google, which has a much smaller test organization than Microsoft).
Well, Google does require code reviews for every check-in, so I had to convert - and I did. As often happens, new converts become zealots :-). So not only did I end up enthusiastically supporting the system, I joined the Google Readability team as well.
Why did I convert? Basically, I looked at the code base, and was impressed. Microsoft does not have a unified style, and a lot of the code looks jarring to the eyes of a person who did not write it.
It's like an accent - small groups of people who communicate in relative isolation from the main body of speakers develop one. In the case of developers, whose only communication is with the compiler, a very strong "personal" accent develops, and the resulting code takes an effort for an outsider to understand.
Code reviews smooth out the accent by expanding the group of people who speak the same code. Having a single style guide extends this to the entire company, so any part of the code looks like you wrote it yesterday. For me, at least, this improves productivity dramatically.
Also, I noticed that developers tend to write better code upfront if they expect that someone else will be looking at it - out of sheer embarrassment :-).
But beyond the company benefits, there are two reasons why I personally love doing code reviews.
First, I learn a lot from them myself. Being exposed to code written by people in different corners of the company is the best way to keep abreast of the many projects you would otherwise never have heard about. Right now I am in the process of reviewing code that I am pretty sure I will use myself - and if not for the code review, I would never have known it existed.
Second, it gives me an opportunity to teach. After more than 20 years of programming computers, there's a bunch of stuff I know, and recalling it during code reviews keeps it alive :-).
However, once we take the code out of the purely machine environment of compile-run-test-check-in, it becomes a human communication. Human communications have an emotional aspect which is missing entirely from the computer-bound interaction of reviewless programming.
Thus code reviews add an entirely new dimension - above and beyond the technical aspects, they are an instance of human communication, and a very sensitive one at that, because in the process of a code review, one developer renders an opinion about the work of another engineer. Tread carefully!
Studies upon studies have shown that people react with emotion first, and logic next. So the emotional rapport between a reviewer and the reviewee can have a bigger effect than the information that trades hands in the process. "How" can - and often does - become more important than "what".
I think about most human interactions as bank transactions involving trust. You make a deposit into your account when you do something that the other person likes. You make a withdrawal when you have the other person do something that you like (but they might not).
In terms of code reviews, what is important for a reviewee? How could a reviewer increase his or her balance?
I think the most important thing for a reviewee is latency - the sooner the review (or at least an iteration of the review) is done, the better.
While a code review is outstanding, a reviewee is often barred from continuing work on the same set of files, because newer changes could involve the same code that is not yet checked in pending the review.
If the latency is low, even big changes requested by the reviewer are easy to make - but if the reviewer sits on the code for two weeks, then requests a total rewrite, and then sits on it for two weeks again before declaring that the first version was better after all, it's an entirely different matter. The tension builds.
Another case where the stress tends to build up is when the reviewee works against a hard deadline. The reviewer often does not live on the same shipping cycle, and does not take the rush into account. This is good, of course - it helps ensure that bad code does not get checked in purely to satisfy timing constraints. But it is very important that the reviewer realizes the pressure is building - and acts in ways that help relieve it.
What else is important for a reviewee? I'd say the style with which the reviewer acts - the civility of the communication. People generally do not tell their office mate, "Hey, you! Fetch me a chair!". But I have seen, many, many times, review comments that say "Rename this variable to foo."
If you and your reviewer have worked together for a while, and you have built up a considerable deposit in their trust account, it is OK to spend a little bit of it to conserve typing. But do not save on civility when reviewing code for a stranger!
Here are my recommendations for the reviewer:
- Respect the reviewee's time
- Maintain quick turnaround
- Do not ask for small tweaks that do not matter. Does this variable really need renaming? If it were your code, would you bother to check out, change, and retest 10 files just to rename it?
- Order not, suggest
- “Consider naming this foo because this is what it’s named everywhere else in the codebase.”
- “If you use bar instead, it could save you some code.”
- “Moving baz here could shave a few clocks from the execution path”
- Respect reviewee's opinion
- (S)he has spent days thinking about this. You have spent less than an hour...
- In case of conflicting approaches (AKA religious disagreements), it's the reviewee who ultimately owns the code - (s)he should have priority
- Praise! Nothing else creates good will more effectively than this.
- "Great CL, thank you for doing this!"
- Golden rule: review others’ code as you want your code to be reviewed
Now, let's look at it from the reviewer's perspective. What does one want when he or she reviews someone else's code?
I personally like my efforts to be recognized. There is a reason I am doing this, right? I want my opinion to be respected :-).
As a reviewer, I want my time to be used effectively.
And, it goes without saying, I want the product's code base to be as good as possible.
What does this mean for a reviewee?
- If it’s not too hard, it is often easier to just do it
- If you find yourself pushing back on almost everything your reviewer suggests, one of you may be unreasonable
- If you do it often with different people, the unreasonable person is you…
- Respect reviewer's time
- Smaller change lists
- More comments, both in code and in change description
- Recognize stellar reviewers at the performance review time
- A note dropped to the person's manager endorsing a stellar reviewer will make his or her day
- And, of course, the Golden rule again!
Have your own story of a code review that went badly because of lack of rapport between the people involved? Did I miss an important point? Write about it here!
Vista: how to open the box
Windows Help and How-to page on opening the box Vista is shipping in:
http://windowshelp.microsoft.com/Windows/en-US/help/2e680b8d-211e-41c5-a0bf-9ccc6d7e62a21033.mspx
A friend sent this to me. I must confess, I fumbled with the box, too, for far longer than I should have. So a link to the page is in order.
But... who designed this thing?! And I mean... the whole thing :-(...
Tuesday, May 6, 2008
The Nazis: A Warning from History
When Ken Burns' famous "The War" came out on DVD, we rented it just in time for Spring Break so our daughters could watch it with us. But after the first two disks, we were too bored to continue.
I found "The War" very repetitive and shallow, and it suffered greatly from a US-centric view of history, skipping almost entirely over anything that was going on in Europe until the landing of American troops in Italy (and then it focused on - well, you guessed it - American troops in Italy!). Which means it missed about 70% of the conflict (http://1-800-magic.blogspot.com/2008/05/how-us-has-won-world-war-ii.html).
Being hyped to the high heavens by the media did not help, of course - it had set my expectations high, and the movie fell way short...
"The Nazis: A Warning from History", which we rented a week ago, was only two disks to Burns's six, but I learned more from its first 15 minutes than from the first two DVDs of "The War".
The film covers the period from the Weimar Republic to just before the fall of Berlin.
Like "The War", it focuses on interviewing eyewitnesses - but the people who participated in the European theater, where the majority of the action was happening.
Most of the interviewees were former Nazis, former soldiers, leaders of small Hitler-Jugend units, ordinary German citizens at the time. I think the focus of the movie was the banality of evil - that, Hitler and his senior henchmen aside, the war would have been impossible without the willing and sometimes enthusiastic cooperation of the "simple folks" - although it also gives good insight into how the Nazi government operated at all levels.
In several cases, the movie's research teams had unearthed documents that the interviewees would probably prefer had never existed - a letter denouncing a neighbour, court documents recording someone's participation in the death squads, etc. - and confronted the interviewees with them. Responses varied, but to fully appreciate the situation, you have to see the movie.
Highly recommended!
I found "The War" to be very repetitive and shallow, and it suffered very much from the US-centric view of history, skipping almost entirely over anything that was going on in Europe until the landing of American troops in Italy (and then it focused on, well, you guessed right - American troops in Italy!). Which means that it missed about 70% of the conflict (http://1-800-magic.blogspot.com/2008/05/how-us-has-won-world-war-ii.html).
Being overhyped to the high heavens by the media did not help of course - it had set my expectations high, and the movie came way short...
"The Nazis: A Warning from History" which we rented a week ago, was only two disks to Burns' six, but I learned more from the first 15 minutes of it than from watching the first two DVDs of "The War".
The film covers the period from Weimar Republic to right before the fall of Berlin.
Like the War, it focuses on interviewing the eyewitnesses - but the people who participated in the European theater where majority of the action was happening.
Most of the interviewees were former Nazis, former soldiers, leaders of small Hitler-Jugend, just German citizens at the time. I think the focus of the movie was the banality of evil - that Hitler and his senior henchmen aside, the war would have been impossible without willing and sometimes enthusiastic cooperation of the "simple folks", although it does give a good insight on how the Nazi government operated on all levels.
In several cases, the movie research teams had unearthed documents that the interviewees probably would preffer to have never had existed - a letter denoucing the neighbour, court documents depicting someone's participation in the death squads, etc, and confronted the interviewees with them. Responses varied, but to fully appreciate the situation, you have to see the movie.
Highly recommended!
Monday, May 5, 2008
YHOO offer withdrawn, but the stock price has not recovered
So, the spell of insanity must have passed, and Microsoft has withdrawn its Yahoo offer. This is a good thing for Microsoft - the combination would never have worked: http://1-800-magic.blogspot.com/2008/02/50b-down-drain-or-microsoft-bids-for.html.
The bad news, however, is that the destruction of shareholder value is not as easily reversed.
Look at today's stock quote. When the proposed deal was announced ('J' on the chart below), Microsoft dropped more than $2 per share. Now that the deal has been withdrawn, it has rebounded - but by a mere 40 cents...
Sunday, May 4, 2008
IT security as an impediment to developer's productivity
My first Computer Science teacher liked to tell this story. In the late 1940s there were 3 different classified projects to design the first computing platform in Russia. All three were run by the military, and, as was typical for military designers in the Soviet Union, all three were working in complete isolation from the rest of the world and from each other. Paranoid times, you see: Stalin was imagining new types of enemies of the state every day.
One of these projects was falling quite a bit behind, and the leadership decided that it was hopeless and declassified it. The team could now go to conferences, talk to other people, and engage in the normal life of a research project, including a lot of information sharing.
This project suddenly turned around, and produced the BESM line (http://en.wikipedia.org/wiki/BESM), which became the workhorse of Soviet computing for the next 40 years - a sort of Warsaw Pact IBM-360. The first version came out in 1952; production of the last one stopped in 1987.
The other two projects stagnated and were eventually killed.
Moral of the story: secrecy is antithetical to research.
When I started working at Microsoft in 1998, I was startled to realize that the campus was not connected to the Internet at all. I had to go to the company library, which had a few workstations on tap, to buy tickets from Expedia. This was, of course, in the name of security, lest someone steal the precious Windows source code.
Surprise, surprise, Microsoft struggled to win against Netscape, despite the fact that the company had far more resources to pour into the browser wars, had great programmers, and expertise in shipping.
Throughout the years I worked at Microsoft, security concerns of corporate IT were always in the way of me doing the work I was hired to do (and passionately wanted to do).
First, the IT always messed with the VPN access to the corporate network. It started just like any other VPN - you click on the icon, enter your password, and in a few seconds you are connected. This was not "secure enough", so they added a step that checks if the computer originating the VPN session has critical updates installed. Then they expanded on this brilliant thought by adding more and more checks. Then they started to require smart cards for access.
As a result, towards the end of my tenure, I had to wait at least a minute to connect when I wanted to work from home, usually more like a minute and a half.
It is kinda obvious that a company should really appreciate it when people want to work more than standard business hours, and should make doing so as easy as possible. Google gets it, by the way - connecting to work is easy (under 5 seconds in most cases), everyone gets a laptop, and they buy you a big monitor for work use at home. Microsoft doesn't.
Today, Microsoft has all but removed VPN access to its corporate network, replacing it with a remote desktop proxy that lets people connect directly to their PCs at work through RDP sessions. I always laugh as I watch my wife do it - there are 3 dialog boxes where she has to enter her password and various PINs, and the process takes minutes...
And of course, if anyone cared, it would still be easy to write a virus that penetrates the RDP connection if the client is infected - all it needs to do is detect that the desktop has been idle for a long time (so the user must have gone away), send a keystroke emulating a Ctrl press every 5 minutes so the remote desktop does not lock, and then inject keystrokes into the RDP client queue to make the server run something off the internet. Mission accomplished!..
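To show how little machinery the trick above actually needs, here is a minimal sketch of just its timing policy. This is an illustration, not a working exploit: the function names and thresholds are my own, and a real implementation would pair this with platform calls (on Windows, something like GetLastInputInfo for idle detection and SendInput for the fake Ctrl press).

```python
# Timing policy for the keep-alive trick described above. Pure logic only -
# the actual input injection is left out. All names and thresholds here are
# illustrative assumptions, not part of any real tool.

IDLE_THRESHOLD_MS = 10 * 60 * 1000     # assume the user has left after 10 minutes
KEEPALIVE_INTERVAL_MS = 5 * 60 * 1000  # tap Ctrl every 5 minutes, as in the text

def should_send_keepalive(idle_ms: int, ms_since_last_tap: int) -> bool:
    """Inject a fake Ctrl press only once the user has been away long enough,
    and no more often than once per keep-alive interval."""
    return (idle_ms >= IDLE_THRESHOLD_MS
            and ms_since_last_tap >= KEEPALIVE_INTERVAL_MS)
```

A benign anti-screensaver utility uses exactly the same loop, which is the point: the hard part of such an attack is not the code.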
Inside Microsoft, IT security interferes with developers' productivity as well. If you are an office worker, you probably don't notice it, because your only computer is fully managed by IT, and there's not much you're doing with it anyway.
But if you're a developer with multiple test boxes (or devices), security is in the way big time. Your off-domain test hardware can't connect to anything. In a lot of cases you can't debug it at all, and when you can, you first have to jump through a lot of hoops.
A guy who worked for me in my previous job went as far as removing his computer from Microsoft's domain - that's how much IT security was interfering with his ability to get stuff done. He read email through the web interface.
Nobody has a way to really measure it, but my gut feeling is that Microsoft loses probably 5-10% of its productivity to the security monster. If you estimate that there are ~20000 people in Microsoft's test, dev, and PM orgs, that would be between one and two thousand people. The size of a whole division.
And I am fairly sure Microsoft is not even close to being the worst company as far as IT impact on developers goes. I've heard of companies where developers don't have admin rights on their machines, so they can't install any software beyond what IT installs. How scary is that - developers being trusted with the future of their company, but not with their own computers...
The big problem with IT security is that the people who decide how to implement it in an enterprise are usually not engineers themselves, and do not really understand the risks. They are not business people, either, so they do not understand the tradeoffs. They are hired to prevent, and prevent they do. And the best way to prevent code from being leaked is to make sure that no code is written, so there's nothing to leak :-).
Saturday, May 3, 2008
How the US won World War II
The undisputed common knowledge in the US is that it won World War II, more or less by itself. The other countries are barely mentioned, except as victims.
I keep reading stuff like this over and over again: "We have to do to them what the Americans did to the Nazis. Kill all their leaders. Kill all the collaborators. Then, we'll find those willing to make peace." http://www.macleans.ca/world/global/article.jsp?content=20080423_11237_11237&page=3 (This particular quote comes from the US's 51st state - Israel - but it perfectly mirrors popular opinion in the "mainland".)
Here are some numbers, though (from http://en.wikipedia.org/wiki/World_War_II_casualties):
Country          Military deaths    Total deaths
Soviet Union     10,700,000         23,100,000
Germany          5,533,000          7,293,000
United States    416,800            418,500
80% of German military deaths were on the Eastern Front.
In military deaths, the US is behind Yugoslavia, Japan, China, Germany, and the Soviet Union, and just barely above the UK.
In total deaths, the US is behind the United Kingdom, Italy, France, Hungary, Romania, French Indo-China, Yugoslavia, India, Japan, Indonesia, Poland, Germany, China, and the Soviet Union, and just above Lithuania and Czechoslovakia.
So in actuality, US involvement in WWII was far, far less than that of most countries in Europe. And the US contribution to winning the war was closer to that of France and the UK, and far, far behind that of the Soviet Union, where the Germans lost 80% of their army and where that army was broken at Stalingrad (http://en.wikipedia.org/wiki/Battle_of_Stalingrad) and Kursk (http://en.wikipedia.org/wiki/Battle_of_Kursk) a full year before the Allies landed in Normandy.
Now, obviously, self-aggrandizement is not an American phenomenon. Every country practices a healthy dose of it.
What is super dangerous about this reading of history ("the U.S. won World War II") is that it creates the idea among the American population that wars are cheap and easy to win. Look, we won the worst war ever to hit human civilization, and most people barely noticed. (Yes, as a percentage of population, the US lost... 0.32%. That's one out of three hundred. Compare that to the Soviet Union's 13.71% - roughly one out of seven.)
Hence, the Rambo mentality.
Hence, Iraq.
But it could have been worse, much worse.
In the mid-80s, two movies came out, one in the Soviet Union and one in the United States, both dealing with the world after a nuclear catastrophe.
The Russian movie was "Dead Man's Letters" (http://en.wikipedia.org/wiki/Dead_Man's_Letters), and it depicted a world that was dead.
The American movie was "The Day After" (http://en.wikipedia.org/wiki/The_Day_After), and it showed farmers removing the thin, irradiated layer of topsoil and preparing for the new crop. The message was: life goes on. We can win.
The Soviet Union is now history (look, America won again! Just like it did with the Nazis!), but the myth of military power on the cheap lives on - Hillary is now ready to obliterate Iran (soon to be a nuclear power), and compared to the morons who are running the show now, she's the sane one.
It's funny, but the reality with Iran will probably turn out quite differently - by switching oil trading from the dollar to the euro, it is they who are more likely to obliterate the US, not the other way around.
Meanwhile, the military hardware is being built. No health insurance though, we can't afford that...
5/5/2008:
In many countries such nationalism arises from a pent-up frustration over having to accept an entirely Western, or American, narrative of world history—one in which they are miscast or remain bit players. Russians have long chafed over the manner in which Western countries remember World War II. The American narrative is one in which the United States and Britain heroically defeat the forces of fascism. The Normandy landings are the climactic highpoint of the war—the beginning of the end. The Russians point out, however, that in fact the entire Western front was a sideshow. Three quarters of all German forces were engaged on the Eastern front fighting Russian troops, and Germany suffered 70 percent of its casualties there. The Eastern front involved more land combat than all other theaters of World War II put together.
http://www.newsweek.com/id/135380/page/5
Thursday, May 1, 2008