Monday, September 11, 2017

Hot war, cold war, cyber war

"I know not with what weapons World War III
will be fought, but World War IV will
be fought with sticks and stones."

Albert Einstein

With relations between Russia and the US having deteriorated to the point where both countries are evicting each other's diplomatic missions, one might ask: what's next?

A hot war between the two nuclear powers is probably - hopefully - impossible.

There is too much risk of a conventional weapons exchange degenerating into a nuclear one.

A cold war is something the US can pursue - but not Russia, which has very few allies, if any.

There is another option left to Russia - a cyberwar.

We already had a preview of what is possible during the Presidential election of 2016, when mere news manipulation on social networks helped bring Trump to power, dealing enormous damage to the US government.

The Internet is an incredibly important part of the fabric of modern society, and abusing it can have a devastating effect.

For example, tampering with stock market data can lead to a market collapse and, if done carefully, can result in long-term negative economic impact by turning a bull market into a bear one.

Even something as innocuous as traffic data can create chaos around major cities and non-trivial losses of productivity - to say nothing of shutting down major pieces of infrastructure such as Google Search or Exchange Online.

And we have not even begun discussing banking, credit cards, and social security numbers. If you think, as I do, that the Equifax breach was bad enough, imagine a simultaneous compromise of all credit rating agencies, banks, and credit card companies.

Society can cope with a breach of security in a single organization, but what if all or most credit cards become dysfunctional at once? The economy would grind to a halt in hours, not days, and it would take weeks to repair the damage - by which time the impact on earnings, stock prices, and commerce would be very material.

What if all the large credit rating agencies started spewing random numbers in place of real data? Lending would seize up, and we know exactly what that means for the economy.

If Russia were to embark on a cyberwar against the US, it would have options that are unavailable to a regular hacker - options for which our cyberdefenses are ill-prepared.


With many billions of dollars available, a national government has time and resources necessary to execute a cyberattack on a devastatingly large scale.

It can work to destroy many infrastructural systems - Google, Microsoft, Facebook, banks, credit rating agencies, market makers - at once, lying low until it has penetrated enough organizations to strike.

It takes a long time to recover from one cyberattack; recovering from many simultaneous attacks covering entire industries would take nonlinearly longer, because qualified personnel are in limited supply and because of the adverse network effects and interdependencies between the affected installations.

The attackers, on the other hand, enjoy a positive network effect as they compromise more and more organizations.

For example, the entire supply chain can be attacked - even if your own organization has implemented all the security controls and is "secure", what about the other code that you run - the OS, the patches, the device drivers, other software?

Are you as confident in the release process at the Chinese company that produced the chipset driver for your servers as you are in your own release process? Exploiting them may open the door to exploiting you.
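One basic defense against a tampered supply chain is to refuse to install any vendor artifact that does not match a digest published out-of-band by the vendor. A minimal sketch, assuming the vendor publishes SHA-256 digests (the file path and digest below are hypothetical):

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()

def verify_artifact(path, expected_digest):
    """Return True only if the file matches the vendor-published digest."""
    return sha256_of(path) == expected_digest.lower()
```

This only moves the trust problem to the channel publishing the digest, of course - a state-level attacker who can alter the driver may be able to alter the published digest too, which is why code signing with an offline key is a stronger (if still imperfect) variant of the same idea.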


A national government has - compared to individual hackers and criminal organizations - an enhanced power to coerce employees into subversive action on its behalf.

It can appeal to a sense of patriotism, but it can also threaten action against one's family ("put this USB keychain in a server, or your grandmother dies"). Obviously, the extended families of people from former Soviet Union countries are the easiest to threaten, but this does not apply only to Russians - executing a hit inside the United States is probably well within the capabilities of Putin's secret services, to say nothing of China, India, and other countries.

With more than half of the developers on many teams having international roots, susceptibility to coercion via blackmail is a real threat.

FISMA/FedRAMP stipulate that no malicious activity should be possible in production without collusion, so organizations that are compliant with these standards are somewhat - but not necessarily fully - protected.

For example, if this requirement is "solved" by "separation of duties" - meaning that a developer cannot access production, and the operator does not know how to compromise the product - the organization is engaging in security theater, not real security. The developer can slip an exploit into the source code, and an operator can slip an exploit directly into production.

But even if this requirement is implemented really well, what about collusion? Who is to say that the Russian government cannot compromise multiple people in the organization?


Prosecution of computer crimes relies on the cooperation of international law enforcement authorities. A government player can stymie an investigation by denying investigators access to the people behind the breach.

In case of a major military power, this can be further enhanced by application of said military power - for example, an apartment complex in Syria or Ukraine used in perpetrating a cyberattack can be "accidentally" destroyed by bombing.

Can the US retaliate?

Of course, and it is reasonable to expect that it will. However, Russian society relies on the Internet much less than the Western world does, and relies even less on resources beyond Russian borders. So the impact on Russian society would not be nearly as strong as the impact of a Russian attack on the US.

So where do we go from here? I don't know if there is a good path, but organizations with major online assets should first and foremost plan for an attack by a national government rather than by individual hackers.

The US government would also do well to take another look at the FISMA and FedRAMP frameworks and strengthen their requirements for cyberdefenses. And it should mandate compliance for a broader set of companies - those whose infrastructure is essential for the functioning of American society - rather than just government agencies and contractors.

Private organizations should be asking themselves questions about their susceptibility to human and supply chain compromise.

Are you confident in all software that runs on your desktops and in the datacenters?

Where did this build of Python/Perl/Node.js come from? What about the OS? Compilers? Device drivers? Tools and utilities used by the devs? (FAR, anyone?) Who compiled them? Was there a security review? If there was, were the people qualified and thorough, and able to defeat really complex exploits that may have originated from the cyberwar industry's best minds?

Is your source code as well secured as your operations? Are code reviews mandatory? How many people need to sign off on a code review before the change is committed? Is there a possibility of collusion? Do you add a random set of people who have to review the code in addition to the normal owners of the code? Are these people, again, qualified to spot non-trivial exploits slipped in by a sophisticated enemy?
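The "random set of people" idea above can be sketched in a few lines: draw extra reviewers from the wider team, excluding the author and the usual owners, so a colluding owner group cannot be sure who will look at the change. This is a minimal sketch with hypothetical names, not any real review system's API:

```python
import random

def pick_extra_reviewers(team, owners, author, k=2, rng=random):
    """Pick k random reviewers outside the change's author and usual owners.

    Random extra eyes make it harder for a colluding group of code owners
    to slip an exploit through review unnoticed.
    """
    excluded = set(owners) | {author}
    pool = [p for p in team if p not in excluded]
    if len(pool) < k:
        raise ValueError("team too small for independent review")
    return rng.sample(pool, k)
```

The design point is that the selection must happen in a system the owners cannot influence; picking extra reviewers by hand defeats the purpose.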

What about the source control itself? Who has access? Can people with admin access make unreviewed check-ins? The system that tracks code reviews - does it have a group of developers or operators who can make untraced check-ins?

What about the build lab? Are there people with administrative access to binaries who can slip in a virus? The release share - who has admin access to these servers?

In operations, do you use an authentication scheme vulnerable to pass-the-hash attacks using tools like Mimikatz, or are credentials generated per access? Can people log on to a server in production without a witness? Is only one witness required, and can the two collude?
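The "credentials generated per access" alternative can be illustrated with a toy sketch: instead of a long-lived password whose hash can be stolen and replayed, each production access mints a fresh, short-lived token. The names, TTL, and structure below are all hypothetical; real systems layer approval workflows and auditing on top of this:

```python
import secrets
import time

TTL_S = 900  # hypothetical 15-minute lifetime per access

def mint_credential(operator, resource, now=None):
    """Mint a single-use, short-lived credential for one operator/resource
    pair; stealing its hash is useless once it expires."""
    now = time.time() if now is None else now
    return {
        "operator": operator,
        "resource": resource,
        "token": secrets.token_urlsafe(32),  # unguessable random token
        "expires": now + TTL_S,
    }

def is_valid(cred, now=None):
    """A credential is honored only before its expiry time."""
    now = time.time() if now is None else now
    return now < cred["expires"]
```

Even a stolen hash database then ages out in minutes instead of remaining valid indefinitely, which is the property pass-the-hash attacks exploit.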

Is access to your production system based on a trust provider? Who is in charge of this trust provider? Is there a developer or ops team which can enable access for themselves in an untraceable way?

Do you have a break-glass scenario for access to production? Can it be used to obtain untraced access?

How about the hardware ecosystem? Who has access to the network? Can they make changes in an untraceable way? Is access to core routers and PDUs secured? Do vendors have access? Are passwords rotated from the factory defaults? Are you confident in your hardware vendors' processes for releasing the firmware and the tools used to diagnose the hardware?

What tracing/logging/security monitoring are you using? Is it on- or off-box? How hard is it to interfere with trace collection/security monitoring? Can a person who designed it fool the system? What about the team who owns security monitoring - can they access production without a trace?

And once an attack is ongoing - does your organization have a mechanism to stop it? Is it possible to clear the assets held by the attacker? Or is reimaging production - and thus losing your customers' data (and most likely, the customers too) - the only option?

This is just a small subset of the questions that a competent security review should examine. Do you have answers for them in your organization? The answer is almost certainly "no", and though this may be OK when confronting a small-budget hacker or a criminal organization, the system will almost certainly fall when attacked by an adversary with a multibillion-dollar cyberwar budget.

We live in interesting times...

Tuesday, January 3, 2017

Would you like to work on one of the world's biggest distributed storage platforms?

“What are the important problems of your field? What important problems are you working on? If what you are doing is not important, and if you don't think it is going to lead to something important, why are you working on it?”
   - Richard Hamming

We are a team in Azure Compute responsible for utilizing exabytes of storage in Microsoft datacenters. Our software aggregates unused disk space and makes it available to customers both as block storage - we have created petabyte-sized disks in the past - and as relational storage available through a SQL interface.

Our technology is unique because it works at every level of the software stack - from the deepest levels of kernel storage drivers to a massively replicated, Paxos-based system that coordinates storage allocation.

Our team subscribes to all the modern paradigms of software development - we roll out to production every week, we code-review every change, we design high-scale, loosely coupled systems with fail-safe mechanisms built in, we do copious amounts of production debugging, testing, and monitoring, and we collect the data before writing code. And we have been doing this for the last five years.

We are looking for senior and principal engineers who can help us take the system to the next level: going from single exabytes to tens and eventually hundreds of exabytes of storage. You need to have a non-trivial, multi-release, deep experience (at least five years) with either massively distributed systems, or operating systems kernel and driver development. Experience with the Windows Storage stack is preferred, but not required. You should be a master coder in C++ (on a scale from one to ten where Stroustrup is eight, we expect you to be no less than five). You should be able to think on your feet, produce simple solutions, implement them quickly and efficiently, and guide them to success in production deployments.

Why should you work on our team?

Our technology is one of the top two or three most advanced systems in its field on the planet. If you love storage and distributed systems, this is the system to work on. If you have expertise in operating systems development, and would like to expand into distributed systems, or vice versa, this is a great opportunity to capitalize on your existing expertise while learning the other universe.

We have the resources of a huge, powerful company behind us, but none of the bureaucratic overhead that is often associated with them. We are at the tip of the tens of billions of dollars the company is investing in software services in general, and Azure in particular.

You will work with brilliant people on a project that directly impacts thousands of developers, and indirectly impacts hundreds of millions of customers. You will learn new things, and share your knowledge with us.

Interested? Send us your resume and we will be happy to talk more!

Thursday, June 2, 2016

LPT: Backing up your Windows 8+ computer

If you have ever tried to back up a Windows machine running a more or less recent system (as in Windows 8+), you may have noticed that there is no longer a UI for doing a full system backup.

You can back up files (actually, file libraries - things like My Documents, Desktop, etc.) - but if you have something in c:\src, you are now out of luck. Evidently, someone in the Windows division decided that less is more.

To the PM who thought that it was acceptable to ship an OS without system backup in 2016 - screw you! I hope you get zero rewards next review period.

Luckily, there is still a way to do full backup. In an elevated PowerShell prompt, type this:

wbadmin start backup -backupTarget:D: -include:C: -allCritical -quiet

Here D: is the USB drive on which the backup will be stored, and C: is the system drive. If you have more than one volume, list them separated by commas, like so: "-include:C:,E:,F:".

Obviously, the backup drive should not be backed up.

You can even make a scheduled task of it, making sure that your machine is backed up every few hours.

LPT: Reset-ComputerMachinePassword

Have you ever restored a domain-joined machine only to discover that it is no longer connected to your domain?

Windows machines have an account in Active Directory with the name MACHINENAME$ (where MACHINENAME is, obviously, the name of your computer) and a randomly-generated password. This password is created when the machine is joined to the domain, and then rotated automatically every 30 days.

This last part - the automatic rotation - means that if you restore a machine from a backup (or a VM snapshot) that is older than 30 days, the machine will no longer be able to connect to the domain: the live machine will have rotated the password, while the backup has the old one.

In the past I would always disjoin and then rejoin the machine to the domain. This requires two reboots, so it is quite a time-consuming operation. Just recently, though, I was moving a virtual machine from one hypervisor to another, and since the box was really, really big, it took a long time. The migration failed, but I booted the semi-broken machine anyway. This was a mistake - the box was broken, AND it must have been near the machine password expiration, so despite being broken, it went ahead and changed the password.

Now the original VM, while still functioning, was no longer on the domain. It was an Exchange server.

Since a lot of Exchange configuration data lives in Active Directory, I did not want to find out what would happen if I took it out of the domain and rejoined it. Instead, I decided to look around for a way to reconnect the machine to the domain.

Guess what - there is actually a really easy way. PowerShell 3.0 (included with Server 2012 and above, and a Windows Update away on 2008 R2) contains the handy command Reset-ComputerMachinePassword, which does exactly what you would think from its name: it resets the machine password in Active Directory and reconnects the box to AD.

From an elevated PowerShell 3.0 or above:

Reset-ComputerMachinePassword -Credential "DOMAIN\Administrator"

You get prompted for the admin user's password (it doesn't need to be a domain admin - just a user that has ownership of this machine's account), and voila! Just in case, I rebooted my server, and it was back online.

Note that the command is present in PowerShell 2.0, but - alas! - there it does not have the -Credential flag, which makes it useless in this particular scenario. So you really do have to upgrade PowerShell on 2008.

Sunday, September 8, 2013

Acer Iconia W3, 64GB Windows 8 Pro tablet for $250

Just got this device at Office Depot.

The price was $299, and I got $50 off for opening their credit account. I have to say, for the price, the device is AMAZING. It has a true tablet form factor: an 8.1" screen with not very much bezel around it.

When I first saw it at the store with the price tag, I thought it was Windows RT, but then I looked at the CPU and it was an Intel Atom, so... it turned out to be the real thing!

Up to now, I did not own a tablet. I don't like severely restricted operating environments (no accessible file system, restricted applications, etc.), and this device lifts all those restrictions.

It is still primarily a tablet. You should not buy it as a "lightweight" laptop replacement - it is just not powerful enough for that.

But it's a tablet that can run Office, VideoLAN, and pretty much every utility application written for Windows. It does not restrict your file system access. It can connect to an SMB file server the way normal Windows can - in other words, it is a tablet without all the stupid, artificial restrictions of a typical tablet.

And, of course, it is still a tablet. It runs forever on a battery charge. I don't know if it will last the full advertised 8 hours, but I walked around with it for 2 or 3, and the battery indicator had barely dropped. It has a touch screen, and the screen itself, while not as fancy as on a Surface, is very readable.

It is light (MUCH lighter than a Surface or an iPad), it does not heat up at all, and the 64GB version, after all is said and done (and Office installed), leaves you with about 30GB of free space and a microSD expansion slot for more. The UI is snappy, and the CPU is powerful enough to play every video that I tried.

And it's just $250. Amazing!

Wednesday, July 17, 2013

An observation for fault tolerant systems

One morning during our vacation stay in California I awoke to an unpleasant surprise: a front tire had gone flat overnight. A trivial event, in theory, as the car is fault-tolerant when it comes to tires: it has a full-size spare.

Unfortunately, the spare was cold: even though the right way to rotate tires is to include the spare in the rotation cycle, in practice I had failed to do this, so the spare had simply hung in its suspension system for several years, untouched and untested.

When I installed it, the very first thing I discovered was that the pressure in the tire was very low. On this particular car (a Toyota Sienna) the spare is well hidden, and it is very easy to forget to check its pressure - so I did. It was not completely flat, but it was not drivable either. Luckily, the hotel was close to a gas station with an air compressor, but if a tire were to burst far away from civilization, driving long distance on this spare would have been slow, painful, and unsafe.

The second problem arising from the incorrect rotation schedule was that while the rest of the tires were well worn, this one was completely new. That means it was appreciably larger in diameter, making the car asymmetric - and it was on a front wheel. A tire shop could have swapped one of the rear wheels for the flat front and installed the spare at the rear, where it would have been less critical, but doing this on a hotel parking lot with one jack was out of the question.

By the time the problem was resolved, I had spent a good part of the day waiting for Costco to replace four tires instead of enjoying a bike ride across the Golden Gate Bridge with the rest of my family.

So what does all this have to do with the design of fault tolerant systems?

Basically, if the system relies on a cold spare - a replacement part that is squared away but is not part of day-to-day operation - there is a good chance that the spare won't work, and you will find that out at the worst possible moment: exactly when you need to use it.

Defective spares are not the only source of problems during failure recovery. The recovery process itself is subject to bugs and operator errors. Usually code paths that are activated during recovery are not exercised daily, and can and often do contain bugs that are not ferreted out during regular testing.

If the repair process involves an operator, things can get even worse: an operator also does not execute the failure recovery process often enough to be familiar with it, and the probability of a fat-fingered action skyrockets. I personally once lost a whole RAID array at home by replacing a working disk instead of the failed one.

Most fault tolerance models presume that failures are independent, and that the probability distribution of the second failure is the same as that of the first. In practice this is usually not true.

In a fault tolerant system, the mean time to second failure is shorter than the mean time to the first failure.
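The claim is easy to see with a toy Monte Carlo sketch. Assume (my numbers, purely illustrative) that a healthy component fails with a mean time of 1000 hours, but with some probability the spare we fail over to is a defective "cold spare" with a much shorter life. The mean time from first to second failure then drops below the mean time to the first failure:

```python
import random

def sim_time_to_second_failure(n=100_000, mtbf=1000.0,
                               p_bad_spare=0.2, bad_mtbf=50.0,
                               seed=42):
    """Monte Carlo sketch: after the first failure we fail over to a spare.

    With probability p_bad_spare the spare is 'cold' and defective, so it
    fails much sooner (bad_mtbf) than a healthy component (mtbf).
    Returns (mean time to first failure, mean time from first to second).
    """
    rng = random.Random(seed)
    first = gap = 0.0
    for _ in range(n):
        first += rng.expovariate(1.0 / mtbf)          # healthy component
        if rng.random() < p_bad_spare:
            gap += rng.expovariate(1.0 / bad_mtbf)    # defective cold spare
        else:
            gap += rng.expovariate(1.0 / mtbf)        # healthy spare
    return first / n, gap / n
```

With these assumed numbers the expected gap is 0.8 * 1000 + 0.2 * 50 = 810 hours - already noticeably shorter than 1000, and that is before counting buggy recovery code paths and operator error.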

Since failure recovery adds new code paths and new processes, it is impossible to achieve complete independence of the primary and secondary faults. So… what to do?

A typical reaction would be to ensure that failure code paths are tested regularly. For instance, the car example above has a reasonably simple process-based solution: test the spare's pressure before every long trip. Likewise, testing a master service failover could (and should) be part of the acceptance tests before a release to production.

A better way to handle the situation, however, is by a more careful design.

If at all possible, prefer a design where there is only one role, so that if one machine fails, the rest just absorb higher QPS. This should be the default for services that do not require state preservation, like most frontends. In this case the divergent code path is simply absent, and the code that decides whether to eject a failed system from the query path is always active.
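A minimal sketch of the idea, with hypothetical backend names: every single request goes through the same ejection logic, so there is no separate "recovery" code path to go stale:

```python
import itertools

class FrontendPool:
    """Single-role pool: every pick() exercises the same ejection logic,
    whether or not anything has failed, so the failure path is always hot."""

    def __init__(self, backends):
        self.backends = list(backends)
        self.healthy = set(backends)
        self._rr = itertools.cycle(self.backends)

    def mark(self, backend, healthy):
        """Health checker calls this; survivors simply absorb the QPS."""
        (self.healthy.add if healthy else self.healthy.discard)(backend)

    def pick(self):
        """Round-robin over healthy backends, skipping ejected ones."""
        if not self.healthy:
            raise RuntimeError("no healthy backends")
        while True:
            b = next(self._rr)
            if b in self.healthy:
                return b
```

Ejecting a backend here is indistinguishable from normal operation - exactly the property the paragraph above argues for.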

This is not always an option, however. Most backend systems require state persistence, and implement a variation of Paxos, ZooKeeper, or a simple master-slave protocol where there is a defined leader and one or more followers.

Here a failure triggers a complex leader re-election protocol, and the new leader may exercise different hardware components - components which may have already failed, but whose failure went undiscovered because the follower did not use them before the election happened.

If the system has distinct roles for the primary and the secondaries, the simplest way to ensure that all machines can execute all roles is to rotate the roles during the normal course of operations. This way, a premature switch away from a failed master is likely to be as uneventful as the routine switch that would have happened just half an hour later.
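The rotation scheme above can be sketched deterministically: leadership advances through the replicas on a fixed schedule, and a failover simply advances to the next replica in the same order - the same handoff a scheduled rotation would have performed anyway. The replica names and 30-minute period are assumptions for illustration:

```python
ROTATION_PERIOD_S = 1800  # hypothetical: rotate leadership every 30 minutes

def leader_at(t, replicas, down=()):
    """Leader at time t: round-robin by epoch, skipping replicas known down.

    Because failover reuses the routine rotation order, the election path
    is exercised many times a day instead of only during disasters.
    """
    n = len(replicas)
    epoch = int(t // ROTATION_PERIOD_S)
    for i in range(n):
        cand = replicas[(epoch + i) % n]
        if cand not in down:
            return cand
    raise RuntimeError("no live replicas")
```

A real system would of course run this inside a consensus protocol rather than from wall-clock time alone, but the key property survives: the emergency path and the routine path are the same code.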

The leader election protocol would then be tested not just a few times in the lab and once in production, but exhaustively - many times, under all the conditions that arise in real life.

In conclusion: choose a car with a full-size spare, rotate your tires periodically, and have the spare participate in the rotation schedule.

Saturday, January 26, 2013

A big tent of crazy

Today I ran into a page that was talking about what Obama's reelection reveals about America. Which was, of course, the standard "people reject Republican agenda" idea. Nothing new, really, and mostly true, but boring.

What made me chuckle, however, was that while discussing why Obama won, they forgot reason number zero: his opponent was a truly terrible candidate. There are probably a few hundred people in the US who could relate to Romney, and they are probably split equally between the two ruling parties. But a few hundred people are not enough to win the vote.

Come to think of it, Romney could just as well have been a Democratic candidate for President - he would have had a similar amount of "base", which is to say, almost none. He just happened to be a Republican because... well, because that job was available. Had he been a Democrat, he would have needed to switch fewer positions than he had to in order to become a primary-worthy Republican.

The truth is, Obama was a terrible candidate, too. He promised change, but delivered more of the same. Government secrecy - worse than under Bush. Military budget - bigger than under Bush. Whistleblower prosecutions - more than under anyone. Income inequality - bigger than under Bush. The DOJ pursues a guy who stole a few academic papers, yet lets the banksters who stole billions off the hook.

For a liberal, Obama was Uninspiring with a capital U. I could not bring myself to vote for him, and ended up writing in Jon Stewart/Stephen Colbert in his place. Yet not for a second did I doubt that Obama would win. Because even though Obama was uninspiring for the left, Romney was an order of magnitude more uninspiring - for everyone.

I mean, we had just come out of a financial train wreck of epic proportions, and here we go: the candidate running for President is a poster child for the forces that created the wreck. What can you expect?

So how did Republicans get a candidate like this?

Because between elections Republicans maintain a big tent. A big tent of crazy. They have people who believe that the Earth is 6,000 years old and that the Creation "Museum" is in fact a... museum. They have people who say that evolution and the Big Bang are "lies from the pit of hell" - and probably even more people who do not know what the Big Bang is. They have people who think that Obama is a foreign-born Muslim whose Harvard transcripts bear the mark of the Beast. They have people who pray for rain but don't listen to scientists. They have people who feel safer with a military bigger than the next 10 combined than with health insurance. They have people who believe in death panels, want to keep the government out of their Medicare, and want abortion providers to be executed.

After years of gerrymandering, the big crazy camp keeps electing the worst idiots ever to Congress with clockwork predictability. However, once every four years someone has to step out of this asylum to run for national office - and this is where the process gets confused. People who can win the Republican vote have no chance of winning the Presidency, and a person who can win the Presidency has no chance of winning Republican hearts and minds.

So what to do? Every four years Republican strategists have to source a "more normal" person to appeal to the whole country. After the previous four years in the crazy camp, they have only a very fuzzy idea of what that might mean. Clearly the passion is out - the last time a Republican said anything his party passionately believed in on the national stage, it did not end well. Clearly the guy must be rich (because "American Dream", and also because elections are an expensive business). Clearly his personal beliefs must be very flexible - the guy should have a "normal" track record, then win the Republican primary, and then become "normal" again.

And so, ladies and gentlemen, I give you... Mitt Romney. The rest, as they say, is history.

Saturday, December 15, 2012

Hobbit, or About One Third of the Way There

"For sale: a copy of Titanic. First tape watched once. Second tape never watched."
-- MicroNews ad, circa 1998

If you, like me, have read The Hobbit six or seven times during your childhood, you probably remember being supremely bored with unnecessary details, longing to cut through all the useless crap as fast as possible and see how it all ended.

Well, my fellow Americans - and I address you because, as the largest and most merchandisable movie market in the world, you are most certainly the primary target for Hobbit, The Movie - your long wait for a better Hobbit has come to an end!

The movie version has a number of improvements on the book.

Less talk, more action

The best feature of the movie is that it cuts down on tedium of the original.

The long and, frankly, unnecessary scene of the one-by-one introduction of the dwarves in the very beginning - cut. After the first couple, the rest of them just roll in - literally - as a group. All nine of the rest, that is - Thorin now comes alone, after Gandalf (and most of the dinner).

Many days and many miles of boring trudging to Rivendell? Gone, gone, and gone! And good riddance! In the new version the company races there with orcs and wargs on their heels - much faster and, frankly, much more action-packed, with Radagast riding his bunny rabbits against the orcs, Gandalf opening a passage in the stone just as everything is about to be lost, and Elvish cavalry riding in to mop the orcs up!

The long wanderings inside the Misty Mountains - greatly streamlined, with a lot more action added.

In the book, the cavalcade of dwarves goes back and forth, and back and forth, through the dark tunnels, with and without the hobbit. And then the hobbit makes his separate journey in the dark - again, back and forth, back and forth. While I imagine this was OK for the beginning of the 20th century, 100 years later the book feels slow and quaint. The movie is faster and more action-packed.

In the movie, the dwarves leave Rivendell without Gandalf. They run into the laps of fighting stone giants, which kick each other for a good five minutes while the company tries to hold on for dear life to the body of one of them. Eventually, they make it into a cave where everyone goes to sleep. The hobbit wakes up and decides to turn back home. Then, after a conversation with a dwarf, he changes his mind. Then the floor opens and they all drop down and get hauled to the Great Goblin. The fall through the floor is just like in Ice Age - it's great to see Peter Jackson taking advantage of so much artistic progress that film-making has made since the book was written!

The tunnels themselves are gone. When the company escapes from the Great Goblin's hall, they run through a large system of wooden scaffolding propped up in the middle of a huge cavern that seems to take up all the space inside the mountain. The scaffolding disintegrates as they run through it, orcs fly around, and eventually they slide down to the very bottom of the cavern on a segment of the scaffolding - think Ice Age again.

When the going gets tough, the tough get going

With all the boring parts gone and some serious action added, the movie needed real fighters to make all the action possible. It handled it masterfully by giving the personages a significant overhaul.

In the book, Bilbo was a reluctant hero. He was a hero, yes, but he only performed heroic acts when every other possibility was exhausted.

In the movie, Bilbo is a willing participant. A fighter. A freedom fighter, almost.

He leaves Hobbiton of his own volition, rather than being kicked out by Gandalf. He is the hero of the fight with the trolls - and a fight it is, unlike the book's affair of subterfuge and distraction. He gives a speech worthy of a great leader about his connection to his own home and how he wants to help the dwarves find theirs.

Finally, he literally saves Thorin's life in the battle of "Out of the frying pan, into the fire". Yes, in the movie, it's a battle. You wouldn't expect real heroes to just sit in the tree, would you?

I only have to hope that this trend continues and in the end Bilbo will have strangled Smaug with his bare hands.

Just like Bilbo, the dwarves have gotten a major face lift. While reading the book I could not help being annoyed at how dour the dwarves were. Other than Thorin, Balin, and (maybe) Bombur (through his mass), they did not really have distinct personalities, mostly playing as a crowd. They couldn't fight, they weren't very witty, and they spent their parts trudging along and muttering under their breath. It always puzzled me how they even expected to fight the dragon in the first place.

Well, not in this movie! From the moment they show up (looking like a motorcycle gang) to the several battle scenes where they make short work of orcs and administer some serious punishment to the trolls, they are fighters. Warriors. While the success of the mission may not be 100% assured, there is no doubt that Smaug will have to fight hard to defend the stolen treasure.

New plot!

As I pointed out above, the movie omitted a lot of unnecessary parts of the book. Unfortunately, here the interests of the viewers were in direct contradiction with the interests of the business. The audience, of course, benefited from less crap, but the business plan clearly called for three installments.

Having three movies rather than one means four times the revenue, because the interest in the previous one heats up again right before the release of the next installment. The sales of DVDs, action figures, various movie-related novelty items all go up.

(By the way, the idea of writing a book version of Hobbit The Movie is exciting. I am very much looking forward to it!)

The other problem with the book is that it does not flow naturally into the plot of Lord of the Rings. The link to Sauron is very vague, the nature of the Ring unclear, Gandalf has considerably less wizard power, etc. The reader is left with a lot of questions - and you CERTAINLY don't want to have viewers ask questions after they've seen the movie. People come to theaters to have fun, not work through complicated plots!

Furthermore, a lot of important characters from The Lord of the Rings are absent from The Hobbit. This means that a great number of people from the original cast - who, I am sure, became good personal friends of the director during their long work on the epic trilogy - would not be involved in the new project. With the royalty revenues from the trilogy coming down, and the price of real estate in Hollywood going up, this is a bigger problem than you might think.

Well, maybe all this would have been an insurmountable challenge for lesser men, but Peter Jackson is truly a brilliant director, and he has proven beyond any reasonable doubt that he is worth every penny of the millions and millions and millions of dollars that he will have made from this movie.

He did what lesser men would not have the guts to do - he radically modified the plot.

The solution is easy to see if you understand the root cause of all the consistency problems with the book. You see, in the past, people wrote prequels BEFORE the successful works they precede. The Hobbit was actually written PRIOR to The Lord of the Rings trilogy.

There are a lot of problems with this approach - the prequel might place limits on what could later be exploited. For instance, what if Star Wars III had been shot before Star Wars IV, and Obi-Wan Kenobi had killed Anakin Skywalker? What then?

But Tolkien did not know then what we know now, and that's why we have what we have - two literary works that look like they were written in different time periods and for different purposes.

Luckily, Peter Jackson's masterful work on the new plot for The Hobbit fixed all these problems in one fell swoop. It extended the plot, giving enough footage for three 3-hour-long, extremely entertaining, action-packed installments. It created roles - and therefore jobs! - for actors that would not otherwise have them. And it made the plot consistent with The Lord of the Rings.

Some plot modifications were small but extremely cute - for instance, Radagast reviving his favorite hedgehog, or Radagast riding a sled propelled by bunny rabbits while pursued by a band of orcs.

Some were more fundamental, but short - Galadriel, Saruman, and Elrond in a council explaining the connection between The Hobbit's Smaug and the rise of Sauron.

Perhaps the biggest addition to the plot is the revival of Azog, who, according to Tolkien, was killed in Moria by Dain (Thorin's second cousin). In the movie he is back from the dead and in hot pursuit of Thorin and his company.

In conclusion

If you liked the hand-to-hand combat between Gandalf and Saruman in The Lord of the Rings, you will enjoy every minute of Hobbit The Movie. And... let me know what happens in the next installment, because I don't think I will be going. I've seen enough :-).

Friday, December 14, 2012

Python is the best programming language? Really?

Apparently, it is, according to LinuxJournal readers

Don't misunderstand me, I LOVE Python. It is a great scripting language. In fact, if *nix shell programming languages had never existed and Python were the default - and the ONLY - shell programming language on ALL OSes, including Windows, the world would be a greener place.

But best PROGRAMMING language? Really? Above C and all its derivatives?

Friday, November 23, 2012

Migrating from SBS 2003 to Server 2008 and Exchange 2010

Small Business Server 2003 was the best thing that happened to my home computing infrastructure in the past two decades. I installed it immediately after the release, and have enjoyed a simple, manageable domain and email solution ever since.

I never upgraded to the newer versions of it though - because one of the most important features for me - the ability to use the server as a gateway - was dropped from subsequent releases (because Server 2008 no longer supported NAT). I liked the programmability of the routing built into Server 2003, and I've built a number of security monitors and integrations with the home security system myself.

Eventually, though, all good things must come to an end, and so it was time to upgrade to newer software. I wanted the programmability of Exchange Web Services, which was not available in 2003, and newer anti-spam products - and the end of the support window for the 2003 software was just around the corner.

I decided to go for a plain Jane installation of Server 2008 and Exchange 2010 - one generation behind, yes, but detailed instructions for upgrading from SBS 2003 were available for that software, and once SBS is out of the picture, migration to newer versions of the separate components is easier.

Clear instructions were quite hard to discover, so I decided to put together this list of pointers for people who would attempt to do it after me.

First, this is THE guide:

It is detailed, ALMOST error-free, and awesome in every regard. Big - HUGE - thanks to Glen Demazter for putting it together!

There are a few quirks that need to be pointed out in addition.

First, use the Administrator account for the installation, not just a user who is a member of the Domain Admins group. This is because Administrator has rights to the AD schema, which the Domain Admins group does not. If you don't, Step 3 will fail.

Second, the domain controller and the mail server should both have static IP addresses. In Step 5 (DHCP), allocate at least 16 addresses at the lower end of the space to the static IP range, select IPs from that range, and configure them as static in the network adapters of the respective servers.

Then, after Server 2008 has been DCPROMO'ed, go to the DNS control panel (Administrative Tools) and create entries for both servers in the forward and reverse lookup zones of your local domain.

In Step 6, the write-up assumes that you use a router. I don't, I use SBS 2003 as a gateway. So instead of redirecting the ports on the gateway, you would use Administrative Tools -> Routing and Remote Access -> SERVERNAME -> IP Routing -> NAT/Basic Firewall -> double-click on Network Connection (or whatever your public network interface is called) -> Services and Ports.

You will need to redirect ports 25 and 443 at a minimum to your mail server. Most likely you would want to have it double as your web site, so you might as well redirect port 80 as well.

When this is done, you need to go to your EXTERNAL DNS server (typically this would be at your domain's registrar) and make DNS records for the external names point to the server.

You COULD create an SRV record, but a regular record is fine, too. As it happens, if you already have a wildcard domain entry, it should work as well, as anything going over HTTPS (and autodiscover traffic does!) will end up on your server, and that's what you need.

In Step 7, there are two companies that make reasonably priced certificates - GoDaddy ($90 per year) and StartSSL ($60 for 2 years of a Class 2 cert).

I chose StartSSL because their package includes an unlimited number of certificates - under their business model they charge you $60 for verifying your personal information (you email them photos of your passport, driver's license, and a phone bill), and then you can issue yourself any number of certificates - wildcard, UCC, whatever you want - for the domains that you own.

Once the certificate is imported, Step 7 misses a very important action - the services need to be transferred to the newly imported certificate from Exchange's self-created cert. This can be done in Exchange Management Console -> Server Configuration -> select the certificate, then click Assign Services To Certificate in the Action pane.
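For those who prefer the Exchange Management Shell, the same assignment can (if I am not mistaken) be done with a couple of commands - the thumbprint below is a placeholder for the one Get-ExchangeCertificate prints for your imported cert:

```powershell
# List the certificates Exchange knows about; note the thumbprint
# of the newly imported one.
Get-ExchangeCertificate

# Bind SMTP and IIS (OWA, EWS, ActiveSync, Autodiscover) to it.
Enable-ExchangeCertificate -Thumbprint <your-cert-thumbprint> -Services "SMTP,IIS"
```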

Once this is done, test your connection. The test site appears to be Microsoft's, but I would nevertheless use a specially created, low-privilege user account to test this out, and then delete the account.

StartSSL certs, while very cheap, have a quirk which took me a lot of pain to resolve. They allow putting ONLY the domain and its derivatives into the certificate - for instance, you can have the domain and several of its subdomains all in one certificate. However - and this is very, VERY annoying - the computers inside the network do not use the public names to connect; they connect by the local name, which is something like MYMAILSERVER.solyanik.local, rather than the public name.

Since MYMAILSERVER.solyanik.local cannot be put into the StartSSL cert, the internal Outlook clients complain twice on every restart (reconnection to the server, really) about the server (MYMAILSERVER) having the wrong cert.

This is fixable.

To do so, you need to first create an internal authoritative zone for your public domain in your DNS server (on the domain controller: Administrative Tools -> DNS -> Forward Lookup Zones -> New Zone -> Primary Zone), and then create entries for autodiscover, www, mail, etc. in this zone. Use the local IP addresses for these entries. This zone will become authoritative inside your network (and, obviously, ONLY for your internal network, as it will not synchronize upstream).
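If you prefer the command line, the same zone and records can be created with dnscmd on the domain controller - here example.com and 192.168.1.10 are placeholders for your public domain and the mail server's internal IP:

```shell
rem Create the internal (split-brain) copy of the public zone...
dnscmd /ZoneAdd example.com /Primary /file example.com.dns

rem ...and point the public names at the internal address.
dnscmd /RecordAdd example.com mail A 192.168.1.10
dnscmd /RecordAdd example.com www A 192.168.1.10
dnscmd /RecordAdd example.com autodiscover A 192.168.1.10
```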

Then follow the instructions in this KB to fix the internal pointers to the mailserver and the autodiscover:

This makes the certificate warnings from internal Outlook clients disappear.
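For reference, the internal repointing is done (as far as I remember) with Exchange Management Shell commands along these lines - the server name and URLs below are placeholders:

```powershell
# Point Autodiscover for internal clients at a name that IS in the cert.
Set-ClientAccessServer -Identity MYMAILSERVER `
    -AutoDiscoverServiceInternalUri "https://autodiscover.example.com/Autodiscover/Autodiscover.xml"

# Do the same for the Exchange Web Services virtual directory.
Set-WebServicesVirtualDirectory -Identity "MYMAILSERVER\EWS (Default Web Site)" `
    -InternalUrl "https://mail.example.com/EWS/Exchange.asmx"
```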

Step 8 - data migration from the older Exchange - does not work as described. You will get an exception when you try to migrate the mailboxes.

To fix this, on SBS 2003 go to Exchange System Manager -> Administrative Groups -> First Administrative Group -> Servers -> SBS2003SERVERNAME -> First Storage Group -> Mailbox Store (double-click) -> Security, and grant full access to the machine account of the new Exchange 2010 server (you will need to select the option that includes computer accounts in the account picker; by default it only includes users and groups and will balk when you ask it to resolve a machine account). The machine account has the same name as the computer.

Second, migrating the public folders won't work either. The fix is described here:

In my case the AD object did not have 443 in it, so the only thing I needed to do was remove the SSL requirement, as described in the first part of the post above:
1. In the properties of the virtual root Exadmin in IIS, go to the “Directory Security” tab.
2. In the “Secure Communications” section select “Edit”.
3. Make sure to deselect “Require secure channel (SSL)” and “Require 128-bit encryption.”
4. If the “Require 128-bit encryption.” is selected and greyed out, make sure to select “Require secure channel (SSL)” and deselect “Require 128-bit encryption.” then deselect “Require secure channel (SSL)” again.

I do not use either Sharepoint or SBS's user shares at home, so I have not tried the instructions in Steps 8 and 10.

I did, however, get Windows Phone 7 to connect to the new instance of Exchange. This was highly non-trivial; here is what needed to be done.

First, go to the StartSSL site, click on "import our CA certificate", and install the certificate on the phone.

Second, for Administrator accounts, on the domain controller, go to Active Directory Users and Computers -> DOMAINNAME.local -> MyBusiness -> Users -> SBSUsers, and, immediately before connecting the user, open the user's properties -> Security -> Advanced -> check "Include inheritable permissions from this object's parent", then OK out of the dialog.

Now delete the existing account on your phone (yes, this is painful, I know), and re-create it. Your people tiles for Exchange contacts will of course be gone...

At the very end, when the SBS server is demoted and removed from the network, Exchange Management Console will start complaining about not being able to access Active Directory. Close it, remove this file: "c:\users\AppData\Roaming\Microsoft\MMC\Exchange Management Console", and reopen it.

Finally, the send connector that was created as part of the Exchange migration worked erratically for me. Some emails would sit in the queue forever, then get rejected. The Exchange queue viewer would show messages sitting in the outgoing queue with the error "A matching connector cannot be found to route the external recipient".

To fix this, do the following:

  • Open Exchange Management Console
  • Go to Organization configuration -> Hub Transport -> Send connectors.
  • There will be SBS connector; delete it.
  • Right click -> New Send Connector
  • Name it something (SMTP) and pick Custom (default) for intended use, then Next
  • On the Address space tab, click Add, set address to *, everything else leave as default. Next.
  • On the Network Settings tab, check the "Use external DNS" checkbox.
  • Then click through to the end of the wizard, which will create the new Send connector.
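The same connector can, I believe, be created in one line from the Exchange Management Shell - a sketch, with the connector name being arbitrary:

```powershell
# Custom send connector that routes all external mail (*) using external DNS.
New-SendConnector -Name "SMTP" -Usage Custom -AddressSpaces "SMTP:*;1" `
    -DNSRoutingEnabled $true
```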

You are now done. Thank you for using Microsoft software!

Monday, July 30, 2012

Algebra not necessary?

NYT published this opinion today.

TL;DR: Math is hard - let's go shopping!

I wanted to drop the author a note to point out the obvious: today we have technology capable of destroying the population many times over. This technology is in the hands of politicians, who are representatives of the population.

Understanding the impact of this technology requires a command of science - both the facts (no, Earth is not 10,000 years old, and no, Jesus is not going to come raining radioactive waste on infidels long before we boil the planet in greenhouse gas emissions) and the scientific apparatus (what a scientific "theory" means, how theories are built, what their applicability is, how they evolve over time, and why the fact that evolution and climate change are "theories" does not mean that one day we will find they are incorrect and the Bible is in fact literally true, etc).

Understanding the scientific method requires mathematics, and, yes, its very basics include algebra.

Then I clicked on his home page.

Wednesday, June 27, 2012

Autopilot is hiring!

My team works on one of the world's biggest software infrastructure projects - we run datacenters that power Bing. We are responsible for the system that automatically provisions hardware, sets up the network, distributes software and data to serving and processing nodes, and monitors servers, applications, and hardware devices.

We are looking for brilliant engineers who are interested in hardware, networking, and distributed systems.

Tuesday, March 6, 2012

The company which makes TSA full-body scanners is called Rapiscan

I am not making this up. Here is the proof:

Wednesday, February 22, 2012

Wednesday, February 1, 2012

Noteworthy: privacy and your favorite online service development team

There is always a conflict between security and agility in the development of web services. And the golden mean does not seem to be a point of equilibrium; companies tend to swing all the way to one side or the other.

Thursday, January 5, 2012

On June 30 we get to sleep 1 second more...

...and the clock will count to 2012-06-30 23:59:60 to resynchronize with the Earth's rotation.



OBSERVATOIRE DE PARIS                                   
61, Av. de l'Observatoire 75014 PARIS (France)
Tel.      : 33 (0) 1 40 51 22 26
FAX       : 33 (0) 1 40 51 22 91
e-mail    :

                                              Paris, 5 January 2012

                                              Bulletin C 43

                                              To authorities responsible 
                                              for the measurement and 
                                              distribution of time

                                   UTC TIME STEP
                            on the 1st of July 2012

 A positive leap second will be introduced at the end of June 2012.
 The sequence of dates of the UTC second markers will be:  
                          2012 June 30,     23h 59m 59s
                          2012 June 30,     23h 59m 60s
                          2012 July  1,      0h  0m  0s
 The difference between UTC and the International Atomic Time TAI is:

  from 2009 January 1, 0h UTC, to 2012 July 1  0h UTC  : UTC-TAI = - 34s
  from 2012 July 1,    0h UTC, until further notice    : UTC-TAI = - 35s 
 Leap seconds can be introduced in UTC at the end of the months of December 
 or June, depending on the evolution of UT1-TAI. Bulletin C is mailed every 
 six months, either to announce a time step in UTC or to confirm that there 
 will be no time step at the next possible date. 

                                              Daniel GAMBIS
                                              Earth Orientation Center of IERS
                                              Observatoire de Paris, France
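The arithmetic in the bulletin is easy to check: a positive leap second inserts an extra UTC second (the 23:59:60 marker), so UTC falls one more second behind atomic time, and that June 30 is one second longer than a normal day. A quick sketch:

```python
# UTC-TAI offset before the leap second, from the bulletin.
utc_tai_before = -34

# A positive leap second adds one extra second to UTC,
# so UTC ends up one more second behind TAI.
utc_tai_after = utc_tai_before - 1
print(utc_tai_after)      # -35, matching the bulletin

# June 30, 2012 therefore has 86401 UTC seconds instead of 86400.
print(24 * 60 * 60 + 1)   # 86401
```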




It pains me to say this, but for this term Google's relevance is higher...

Wednesday, December 28, 2011

Fix for Samsung Focus beeping issue

For the last couple of days my Windows Phone has been very sick. It would not go to sleep when the power button was pressed. And when I unmuted the sound, I discovered that it was beeping incessantly.

This was happening only when the device was on battery power. When I plugged it in, it behaved correctly. At first I assumed that it had something to do with the battery, but rebooting the device and removing/reinserting the battery did not help.

The sound it was making, though, was the same as the one for connecting power. So I looked closely at the USB power connector in the phone and discovered that the signal carrier was bent and touching the walls of the adapter. I straightened it with a tiny screwdriver, and the problem disappeared.

North Korea has its own - official - reddit

The comments on it are absolutely hilarious.

Tuesday, December 20, 2011

Nice try, Amazon!

I am sure this offer is entirely fair :-).

Saturday, December 17, 2011

Thursday, December 15, 2011

National Defense Authorization Act for Fiscal Year 2012 (it looks like this time of the year again)

"Göring: Why, of course, the people don't want war. Why would some poor slob on a farm want to risk his life in a war when the best that he can get out of it is to come back to his farm in one piece. Naturally, the common people don't want war; neither in Russia nor in England nor in America, nor for that matter in Germany. That is understood. But, after all, it is the leaders of the country who determine the policy and it is always a simple matter to drag the people along, whether it is a democracy or a fascist dictatorship or a Parliament or a Communist dictatorship.

Gilbert: There is one difference. In a democracy, the people have some say in the matter through their elected representatives, and in the United States only Congress can declare wars.

Göring: Oh, that is all well and good, but, voice or no voice, the people can always be brought to the bidding of the leaders. That is easy. All you have to do is tell them they are being attacked and denounce the pacifists for lack of patriotism and exposing the country to danger. It works the same way in any country.

In an interview with Gilbert in Göring's jail cell during the Nuremberg War Crimes Trials (18 April 1946)"

Why you should leave your vote for President (and most likely, Senator) blank next year

No matter what the Republican alternative is, the Democrats MUST be held accountable for this.

"Virtually all Democrats and Republicans voted to strip citizens of their rights in a vote of 93-7."

Here is the roll call.

The following people voted Nay:
    Sanders, VT
    Lee, UT
    Wyden, OR
    Merkley, OR
    Coburn, OK
    Paul, KY
    Harkin, IA

Both WA senators voted for it. None of them will get my vote next election season, no matter what the alternative is.

Tuesday, December 13, 2011

Generating random numbers with normal distribution

Question: Given a standard generator with uniform distribution, generate a normally distributed sequence of random numbers.
Answer: Box-Muller transform!

Source snippet:

class NormalRandom
{
    private bool haveNextRandom = false;
    private double nextRandom = 0;
    private Random rnd = new Random();

    /// <summary>
    /// Implements a random number generator with normal distribution
    /// based on the polar form of the Box-Muller transform.
    /// </summary>
    /// <returns>A random number with normal distribution.</returns>
    public double NextDouble()
    {
        // The transform produces two values at a time; return the
        // cached one if we have it.
        if (haveNextRandom)
        {
            haveNextRandom = false;
            return nextRandom;
        }

        // Rejection-sample a point inside the unit disc.
        double x1, x2, w;
        do
        {
            x1 = 2.0 * rnd.NextDouble() - 1.0;
            x2 = 2.0 * rnd.NextDouble() - 1.0;
            w = x1 * x1 + x2 * x2;
        } while (w >= 1.0 || w == 0.0);

        w = Math.Sqrt((-2.0 * Math.Log(w)) / w);
        nextRandom = x2 * w;
        haveNextRandom = true;

        return x1 * w;
    }
}

...and here are the results:
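The results chart has not survived, but the shape is easy to sanity-check. Here is a quick Python transliteration of the same polar Box-Muller loop (the function name is mine); the mean and variance of the output should land close to 0 and 1:

```python
import math
import random

def next_gaussian_pair():
    """One round of the polar Box-Muller transform: rejection-sample a
    point inside the unit disc, then scale it into two independent
    standard normal values."""
    while True:
        x1 = 2.0 * random.random() - 1.0
        x2 = 2.0 * random.random() - 1.0
        w = x1 * x1 + x2 * x2
        if 0.0 < w < 1.0:
            scale = math.sqrt(-2.0 * math.log(w) / w)
            return x1 * scale, x2 * scale

samples = [v for _ in range(20000) for v in next_gaussian_pair()]
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
print(f"mean={mean:.2f} var={var:.2f}")  # should be near 0 and 1
```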

Monday, December 12, 2011

Graphics in console application

A long, long time ago when programming for Windows you had to make a choice - your application would have to be either console or GUI, but not both. If you linked your app for the console subsystem, you could not create windows or dialog boxes, and the application did not have a message loop. If you were a GUI app, you could only use a console if you created it yourself, and your app could not inherit its parent's console.

At some point down the line this got fixed, so a console application today can have UI elements - for example, it can call MessageBox(). Despite the weirdness - I am sure HCI purists/Apple would never approve of it - it can actually come in quite handy. I, for one, quite often find myself in need of a graphic in the middle of a simple application (sometimes just to visualize something as part of a debug code path) which I don't want to convert to a fully-fledged GUI app.

Unfortunately, there is precious little information on how to do mixed-mode console/GUI programming on the Internet, so I figured I'd fill the void :-).

First, add references to System.Windows.Forms and System.Drawing (the latter only if you are going to be drawing, of course) to your app, as well as the corresponding "using" directives.

Then you can create a dialog box that derives from Form:

    using System.Windows.Forms;

    class MyForm : Form
    {
    }

...and display it as follows:

    MyForm f = new MyForm();
    f.ShowDialog();

The ShowDialog function is blocking - your console thread will not get control back until the user closes the window. Of course, the standard console functions all still work; you can print to the screen like so:

    class MyForm : Form
    {
        protected override void OnPaint(PaintEventArgs e)
        {
            Console.WriteLine("Paint called!");
        }
    }

As an example, here is a very simple application that allows a user to plot simple functions from Math library:

// Copyright (C) Sergey Solyanik.
// This file is subject to the terms and conditions of the Microsoft Public License (MS-PL).
// See for more details.
using System;
using System.Drawing;
using System.Reflection;
using System.Windows.Forms;

namespace Graph
{
    class Graph : Form
    {
        public delegate double Function(double x);
        public Function F;
        public double X1;
        public double X2;
        public double Y1;
        public double Y2;

        private double stretchX;
        private double stretchY;

        // Maps a point on the mathematical plane to screen coordinates...
        private int ToScreenX(double x)
        {
            return ClientRectangle.Left + (int)((x - X1) * stretchX);
        }

        private int ToScreenY(double y)
        {
            return ClientRectangle.Bottom + (int)((y - Y1) * stretchY);
        }

        // ...and back.
        private double ToPlaneX(int x)
        {
            return X1 + ((double)(x - ClientRectangle.Left)) / stretchX;
        }

        private double ToPlaneY(int y)
        {
            return Y1 + ((double)(y - ClientRectangle.Bottom)) / stretchY;
        }

        protected override void OnPaint(PaintEventArgs e)
        {
            stretchX = (double)(ClientRectangle.Right - ClientRectangle.Left) /
                (X2 - X1);
            stretchY = (double)(ClientRectangle.Top - ClientRectangle.Bottom) /
                (Y2 - Y1);

            // If zero is inside the horizontal range, draw the vertical axis.
            if (Math.Sign(X1) != Math.Sign(X2))
                e.Graphics.DrawLine(Pens.Black,
                    ToScreenX(0), ClientRectangle.Top,
                    ToScreenX(0), ClientRectangle.Bottom);

            // If zero is inside the vertical range, draw the horizontal axis.
            if (Math.Sign(Y1) != Math.Sign(Y2))
                e.Graphics.DrawLine(Pens.Black,
                    ClientRectangle.Left, ToScreenY(0),
                    ClientRectangle.Right, ToScreenY(0));

            // Plot the function as a sequence of short line segments,
            // one per horizontal pixel.
            for (int x = ClientRectangle.Left; x < ClientRectangle.Right - 1; ++x)
                e.Graphics.DrawLine(Pens.Black,
                    x,
                    ToScreenY(F(ToPlaneX(x))),
                    x + 1,
                    ToScreenY(F(ToPlaneX(x + 1))));
        }
    }

    class Program
    {
        // Lists all public static Math methods of the form
        // double f(double), and lets the user pick one by name.
        private static Graph.Function SelectFunction()
        {
            Console.WriteLine("Available functions:");
            Type t = typeof(Math);
            MethodInfo[] m = t.GetMethods();
            for (int i = 0; i < m.Length; ++i)
            {
                if (m[i].IsPublic && m[i].IsStatic && m[i].ReturnType == typeof(double))
                {
                    ParameterInfo[] p = m[i].GetParameters();
                    if (p.Length == 1 && p[0].ParameterType == typeof(double))
                        Console.WriteLine("    " + m[i].Name);
                }
            }

            while (true)
            {
                Console.Write("Select a function to plot: ");
                string response = Console.ReadLine();
                for (int i = 0; i < m.Length; ++i)
                {
                    if (m[i].IsPublic && m[i].IsStatic && m[i].ReturnType == typeof(double))
                    {
                        ParameterInfo[] p = m[i].GetParameters();
                        if (p.Length == 1 && p[0].ParameterType == typeof(double) &&
                            m[i].Name.Equals(response))
                        {
                            return (Graph.Function)
                                Delegate.CreateDelegate(typeof(Graph.Function), m[i]);
                        }
                    }
                }
            }
        }

        private static double GetNumber(string prompt)
        {
            for (; ; )
            {
                Console.Write(prompt);
                string response = Console.ReadLine();
                double result;
                if (double.TryParse(response, out result))
                    return result;
            }
        }

        static void Main(string[] args)
        {
            Console.CancelKeyPress += delegate
            {
                Environment.Exit(0);
            };

            Console.WriteLine("Press Ctrl-C to quit.");

            Graph g = new Graph();

            g.F = SelectFunction();
            g.X1 = GetNumber("Abscissa lower boundary: ");
            g.X2 = GetNumber("Abscissa upper boundary: ");
            g.Y1 = GetNumber("Ordinate lower boundary: ");
            g.Y2 = GetNumber("Ordinate upper boundary: ");