Wednesday, December 19, 2007

Unix vs. Windows

Since I've done two Unix-hating posts, in the interest of being "fair and balanced", I thought I should talk about the weaker sides of Windows as well.

(Parenthetically, I did not actually diss Unix per se. I was writing about the quality of its dev tools. It is hard to argue with the fact that of all software companies in the world, Microsoft has spent by far the most effort to support its developers - and reaped the biggest reward in the process.)

What I don't like about Windows is the API, or, rather, the meta-API - the general principles that guide the developers who create Windows APIs.

A short poll for Windows developers - how many of you remember all of the arguments to CreateFile? To CreateWindowEx? Same Windows developers - how many of you remember the arguments to fopen? Big difference, eh?

(Parenthetically, this is how Unix developers live without Intellisense - by having functions that take fewer arguments :-)).

But you probably still use the same arguments with CreateFile that you would use with fopen - a file name, and an access mode. And the access mode implicitly dictates how the file is treated if it does not exist.

The difference is that the Windows API tries to cram every possible parameter into the function definition. The Unix APIs, by and large, provide for the most frequently used case, and if you want to do something non-standard, you do it some other way.

For example, if once in a blue moon you want opening for write to fail when the file does not exist, you can accomplish the same thing by just checking whether the file exists upfront - there's really no need to cram every possible piece of functionality into one function!

The problem is not just getting programmers to learn to type CreateEvent(NULL, FALSE, FALSE, NULL) when all they want is an event. It's that all these NULLs are pushed on the stack - megabytes of 'push 0' in any Windows app.

Here's a very simple program "in Windows" and what the compiler generates for it:

int wmain(int argc, WCHAR **argv) {
HANDLE h = CreateFile(argv[1], GENERIC_WRITE, 0, NULL,
CREATE_ALWAYS, 0, NULL);
00401000 mov eax,dword ptr [esp+8]
00401004 mov ecx,dword ptr [eax+4]
00401007 push esi
00401008 push 0
0040100A push 0
0040100C push 2
0040100E push 0
00401010 push 0
00401012 push 40000000h
00401017 push ecx
00401018 call dword ptr [__imp__CreateFileW@28 (402004h)]
WriteFile(h, "Hello, world", sizeof("Hello, world"),
NULL, NULL);
0040101E push 0
00401020 push 0
00401022 push 0Dh
00401024 mov esi,eax
00401026 push offset string "Hello, world" (4020F4h)
0040102B push esi
0040102C call dword ptr [__imp__WriteFile@20 (402000h)]
CloseHandle(h);
00401032 push esi
00401033 call dword ptr [__imp__CloseHandle@4 (402008h)]
return 0;
00401039 xor eax,eax
}
0040103B pop esi
0040103C ret

And the same program "in Unix":

int main(int argc, char **argv) {
FILE *fp = fopen(argv[1], "w");
00401000 mov eax,dword ptr [esp+8]
00401004 mov ecx,dword ptr [eax+4]
00401007 push esi
00401008 push offset string "w" (4020F4h)
0040100D push ecx
0040100E call dword ptr [__imp__fopen (4020A0h)]
00401014 mov esi,eax
fputs("Hello, world", fp);
00401016 push esi
00401017 push offset string "Hello, world" (4020F8h)
0040101C call dword ptr [__imp__fputs (4020A8h)]
fclose(fp);
00401022 push esi
00401023 call dword ptr [__imp__fclose (40209Ch)]
00401029 add esp,14h
return 0;
0040102C xor eax,eax
}
0040102E pop esi
0040102F ret

The Windows executable code is 60 bytes, and the Unix version is 46 - almost 25% less. But memory is free, right? No, it's not - you pay for it in performance.

What else is bad about Windows? Kernel mode! It's a huge blob of millions of lines of code, all interdependent, all running in the same address space, with every component having the potential to trash the state of another.

That's where drivers live, too - so even if Microsoft hired only absolute wizards, there would always be a bunch of device drivers - written by contractors hired by hardware companies - to coredump your computer for you.

On top of it, you get KD to debug it.

I used to work in NT Base for a year (I ran away because of the terrible tools - I just couldn't stand KD after 6 years of using the Windows CE kernel debugger, which is exactly like Visual Studio, except it can step in and out of the operating system), and I observed NT devs for all of my 9 years with Microsoft.

The only way to make any progress working with any kernel component in Windows is to know everybody else who works in kernel mode, because an average debug session involves having 2-3 people from across the stack - one level above you, one level below, and one to the side (security) - sitting together at your computer trying to figure out who corrupted the pool today.

Or, alternatively, you can send the remote sessions around (remote kernel debugging is a skill the Windows NT team has perfected to the black belt level).

What else? I think that the recent metadata-based approach that Microsoft champions (where the code is not enough to describe what an application is doing; one has to supply a bunch of metadata describing security level, signing, component dependencies, versions of dependencies, etc.) is way too complicated.

Compare, for example, the learning curve for the regular Windows UI programming model (which I think was really well designed) with Avalon. The former requires maybe a day to get right, and maybe a week to become really proficient with. The latter, I have no idea - after trying to grasp it for a couple of days, I gave up.

(Parenthetically, how does it all compare with X? Read about it here :-).)


Илья Казначеев said...

You're not supposed to use raw libX11 to draw UIs.

That would be equivalent to using raw NT kernel calls to write Windows programs.

You understand, libX11 is supposed to implement the X11 protocol. If you want it to be more abstract, you'd either have to bloat libX11, which is conceptually wrong since everyone would use the raw calls to program their own toolkits anyway, or bloat the X11 protocol, which would be terribly wrong.

The kind of xlib you really want is Xaw, but it's not really supposed to be used now :)

As for the rest of the article, it was interesting, mirroring my own experience.

Anonymous said...

Compare, for example, the learning curve for the regular Windows UI programming model (which I think was really well designed) with Avalon. The former requires maybe a day to get right, and maybe a week to become really proficient with. The latter, I have no idea - after trying to grasp it for a couple of days, I gave up.

Maybe you are just tired of computer programming? Maybe that's why the good old technologies work better for you?
I've spent plenty of time learning f*ng GDI. It's a technology that makes you think about your UI from the viewpoint of technical restrictions. "You are not supposed to use multiple composition because the f*ng core will lose track of object handles." "You are not able to redefine the view of a control because its behaviour is strictly bound to the existing one." And so on.
But I just read Sells' book on WPF to be fully armed to make Avalon UIs of any complexity.

DzembuGaijin said...

The NT kernel is a mess. While KD seems to be adequate for the job, it is a very low-level tool, and you do need to be a black wizard to fully utilize it. You need to understand what Windows does at the assembler level to KD it. But I saw some people who were very good at it, so it can be done, I guess. The biggest issue with NT is interdependence. You *REALLY* have to know what is going on in *ALL* of Windows' guts, even if you just write a non-trivial mini-filter. That is much harder to do. You may read a lot of code (if you work for THE company) and try to understand it, but it is a very difficult task, and as Sergey said, you may need the help of a few people from different components to get the job done. People outside may not have this luxury, so writing drivers is very hard, and if they don't BSOD, they will most likely cripple your system in one way or another.

As for high-level APIs... whatever ;-) Most lack any logic, so they come and go. The practical approach: cookbooks and samples. Like it or not, it may be easier to get the idea and adapt it to your needs. That sounds bad to people who want to know it ALL really well and "become really proficient", but is it worth it if you will only work with it for, say, 1-2 years or less? There is just no good choice, and our brain seems to be an excellent pattern recognition machine, up to this task. It is just not practical to really learn big complex frameworks that can be very different in concept from what you know and are used to. A good how-to book is my choice, and if you have to stick with a technology, you will ultimately master it in a few years anyway :-) and if not, who cares?

Chris Gray said...

:) as I sit here and stare at kd I just have to smile ;)