Paul Graham wrote an essay titled Beating the Averages in which he describes the Blub Paradox: programming languages vary greatly in power, and the hypothetical language Blub falls smack dab in the middle of the spectrum.
If you are a Blub programmer looking at less powerful languages, you know you are looking down the spectrum. The Blub programmer would say to a programmer of a lesser language: how can you do anything in language X… it doesn't even have feature Y? However, when a Blub programmer looks up the spectrum at more powerful languages, all he can see are weird languages that look equivalent in power to Blub… at least on the surface.
The Sapir-Whorf Hypothesis
The Sapir-Whorf hypothesis holds that the structure of a natural language shapes, and perhaps limits, how its speakers think. Many computer scientists and computational linguists believe that the hypothesis also applies to computer languages. Ken Iverson, the creator of APL, believed that it applied to programming languages, and this was the central theme of his Turing Award lecture, Notation as a Tool of Thought. It is also what Paul Graham is alluding to in his essay.
Sinclair BASIC
My first computer was a ZX Spectrum clone sold in Brazil circa 1985. Consequently, my first computer language was Sinclair BASIC. It had no real concept of the functions that languages such as C and Python are built on. How could anyone do anything in Sinclair BASIC without a feature as fundamental as functions?
What ends up happening is that you work around the limitation unconsciously. Sinclair BASIC provided the GO SUB construct, which jumps to a given line number marking the start of a subroutine: a piece of code that may be called from several different locations within the program and that hands control back with RETURN. The only problem with that type of construct is that it can lead to a severe case of spaghetti code.
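As an illustration, a GO SUB call looked something like this (the line numbers and variable name are my own invention, but GO SUB, RETURN, and REM are genuine Sinclair BASIC keywords):

```
 10 LET X=3
 20 GO SUB 100: REM "call" the subroutine at line 100
 30 LET X=7
 40 GO SUB 100
 50 STOP
100 REM subroutine: print the square of X
110 PRINT X*X
120 RETURN
```

Note that nothing ties line 100 to a name, a parameter list, or a return value; the subroutine reads and writes the same global variables as everything else, and any GO TO can jump straight into its middle. That is exactly how the spaghetti creeps in.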
My point, really, is that there is a direct correlation between my productivity and the amount of screen real estate at my disposal. There are those who find a multiple-monitor setup hugely wasteful and cannot see the benefits of such a setup. I've been there myself.
But what really happens is that you work around the limited screen area unconsciously… much like what you do when working with a restrictive programming language. It takes a fraction of a second to Alt+Tab through a few applications to go from Emacs to a browser window to check your work. But over time, these small actions add up to a huge pain and break the flow of coding. This is the Blub Paradox of screen real estate.
My current setup at home consists of 21" and 19" ViewSonic LCDs at 1680x1050 and 1440x900 respectively. This setup is in dire need of an upgrade, and I base that on the upgrade I recently got at work: a Dell 27" at 1920x1200 and a NEC SyncMaster 21" in portrait mode at 1200x1600. All of a sudden, my home setup feels puny.
In essence, once you have a given monitor setup, it is really easy to look down the screen real estate spectrum and understand how lesser setups are inferior, but not so easy when you are looking up the spectrum. It took me about a week of interacting with the new work setup to truly understand the need to upgrade at home. Now I just need to come up with the money for the upgrade.