The Free Software Foundation promotes a definition of free software that is assumed to be practical. They say that, unlike BSD-like licenses, the GPL gives practical, effective freedom. That's just wrong, because effective freedom depends more on technical value than on any license, as I'll demonstrate below. An open license is just one way to help gain effective freedom; it is not sufficient and, to some extent, may not even be necessary to reach an acceptable level of freedom.
First, let's define what "effective freedom" means. A software end user is totally free if he can effectively perform anything he may want, the way he wants, as far as it's technically possible, in a reasonable amount of time. "The way he wants" may include privacy, security and reliability requirements, which in turn include not having to trust external entities, especially when those entities are not trustworthy.
Software that can do what the user wants only at the cost of days or weeks of work, or hundreds of dollars, doesn't provide very effective freedom to access the feature. At worst, if the cost is not affordable for the user, in either time or money, the freedom becomes theoretical rather than effective. The case of other users doing the work for you is treated separately.
With enough time, you can do whatever you want on any system. At worst, you have to redevelop the whole system from scratch. Consequently, the only effective difference in freedom between two systems is the time or money you need to invest to get what you want.
Just think about a world where no free software exists at all. Now, I create a 256-byte open source "hello world" program in x86 BIOS assembly. It's a free OS, technically not far from GNU Hurd. Does it bring freedom? No, because it's basically a blank sheet, and the end user cannot perform anything without days and days of programming. In that sense, anything is better than this software. So, what's the difference with a full GNU/Linux OS? Technical value. That's all.
On the other hand, the BIOS is already a piece of software that gives some level of freedom. Even though its source code is usually closed, its interface is well documented and allows users to program a system once and run it everywhere.
A user may depend on a feature to do what he wants. Free software doesn't guarantee that this feature will be available forever, because, in the end, software gets deprecated and dropped even when no piece of new software replaces the existing features users rely on. To some extent, the user is free to keep his old software, but this is made very hard: due to the insane dependency webs of some OSes (e.g. Linux distributions), it may be awfully hard (hours of headaches) to upgrade some software while keeping the old.
Even if the user accepts that he won't upgrade or use new software (which is a loss of freedom) and keeps all his old software bundled on his computer, when he upgrades his computer he'll have to upgrade his kernel and X.org to get the new hardware supported. On Linux, upgrading the kernel without touching the user space is quite easy, because kernels are backward compatible (which is a TECHNICAL value, not due to the license), but X.org is harder to upgrade without touching user-space components. Still, since X.org has relatively few dependencies (again, technical value), it's possible, compiling from the sources, to get it working without destroying the whole user space. That's true only to some extent: I don't think X.org 1.9 can run on old libc.so.5 systems. This makes it possible to execute unsupported 7-year-old software on a new machine with one day of human work. It would be MUCH harder if the system were technically inferior, e.g. a kernel bound to user space, or a GLIBC that isn't backward compatible. Eventually, you cannot make things work without changing source code. This is where the OSS license comes to the rescue:

1) You can modify the source code of the program you want to use. Depending on how much the user-space API has changed, this may require days or weeks. Even if you have enough time to do that for one program, you won't be able to do it for the 20 programs you need.

2) You may statically compile the program in an old user-space environment that you get working in an emulator, then copy the static binary to your real environment. That may be very tough if the program has many dependencies on shared libraries and daemons. You may have to rename daemons if they conflict with existing ones. At worst, this may not be possible without refactoring if the program loads dynamic plugins (e.g. a PAM-like system). This may take days or weeks of work. Again, this time depends on the technical value of the system, not on software licenses.

3) Run the program in a virtualized machine. In that case, you don't need the source at all.
If the support of a good program is ended, a community fork may be created, but you just cannot rely on it. This happens exceptionally, and only for huge projects like KDE 3 (forked as Trinity) or OpenOffice.org (forked as LibreOffice). It didn't happen for XFCE 3 or GNOME 1, notably Nautilus 1, which was much better than Nautilus 2.x in many ways: resizable and movable icons in every folder, floating toolbars and menus. I managed to run Nautilus 1.0.6 on my 2011 Gentoo system (after days of collecting, compiling and fixing bugs in its dependencies), but it runs unstably, and I cannot guarantee that it'll still work in a few years.
Similarly, relying on other users to implement the features you want is ineffective. There's absolutely no guarantee that they will, and, in my experience, commercial software (be it OSS or not) is rather more feature-complete than community-driven software.
There are two ways to customize software: either you change the source code, which may require hours or days, or you use the UI. For example, all GTK1 applications make it possible to move menus and toolbars around; that way, you can get the GUI layout you want in a few seconds. Vim, Emacs, OpenOffice.org and MS Office all provide easy ways to significantly change or add features, in a few seconds or minutes, sometimes hours, but always MUCH less than you would need in less advanced programs such as Elvis or KOffice. Another example: with GTK+ it's very easy to change menu hotkeys without changing any application source code.
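As a concrete sketch of the GTK hotkey example: in GTK+ 2, on-the-fly accelerator editing is off by default and is enabled with one line in `~/.gtkrc-2.0` (in GTK+ 1.2 it was the default behaviour). After that, hovering a menu item and pressing a key rebinds it, with no source change in any application:

```shell
# Enable on-the-fly menu accelerator editing for all GTK+ 2 applications
# on this account; no application source code is touched.
cat >> ~/.gtkrc-2.0 <<'EOF'
gtk-can-change-accels = 1
EOF
```

This is exactly the kind of seconds-scale customization the paragraph above contrasts with hours of source hacking.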
One of the most basic things you can do on a good modular system is to select the components you want. Each component has a standard API and may be configured or substituted without breaking the rest of the system. So there's a "choice point" at each module. This helps very much in building a system that does what you want, the way you want.
One of the best examples available on Linux is the window manager. Without changing a line of code of any GUI application, you can choose the window manager that best suits your needs. On the other hand, GUI toolkits (e.g. GTK vs Qt) have no abstract common API, so the choice is much more limited. Only the developer can choose Qt or GTK; the end user can only choose between some Qt or GTK applications, but cannot get the best of both worlds. I.e., assuming claws-mail is, for non-toolkit-related reasons, the e-mail client he likes best, he cannot easily make it a Qt application: to do that, he would have to dramatically change the source code. That's not effective freedom. Again, the freedom of choice is a technical issue.
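The window-manager choice point can be sketched with a classic `~/.xinitrc` (the window managers named here are arbitrary examples). Swapping one `exec` line replaces the window manager for every application, with nothing recompiled:

```shell
# ~/.xinitrc -- config fragment, read by startx; not meant to run standalone.
# Every GUI application keeps working whichever line is left uncommented,
# because the ICCCM/EWMH conventions form the standard API between them.
# exec fvwm
# exec wmaker
exec openbox
```

The same one-line substitution is impossible for the toolkit, precisely because no equivalent common API exists there.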
Defining APIs between components, OSS or not, makes it possible to replace any component with a compatible one, OSS or not. It also makes it possible to reuse a component without modifying its source code, and to benefit from its improvements. This way, the Internet Explorer Trident rendering engine can be used by other browsers such as Maxthon or Avant Browser. This provides much freedom of choice even in a closed-source environment. The only exception to that rule is copyleft software with contaminating licenses that restrict plugin redistribution.
Similarly, a stable API is necessary to make replacing or changing a module reliable without destroying the whole system. Basically, if you change some module at version X and version X+1 is not compatible, you have to redevelop the change for the new version if you use any piece of software that directly or indirectly depends on that module.
Compiling software that's not supported by your distro from source tarballs requires some C language knowledge, some arcane autoconf knowledge, and some amount of time (minutes or hours). If you don't have this knowledge, you'll be restricted to your distro's packages, with some ridiculous conflicts (e.g. gnuplot may conflict with KDE), which provides much less freedom than you get with some closed systems like Windows. Consequently, freedom of choice comes at the cost of a significant amount of time and knowledge. A technically better system, with more packages, more automatic tarball installs and fewer conflicts, provides more freedom.
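For readers who haven't done it, the tarball ritual mentioned above looks like this (a command sketch; `foo-1.2` is a hypothetical package, and the `--prefix` keeps the unsupported build out of the distro's way):

```shell
# Classic autoconf build of an out-of-distro package (sketch).
# Installing under $HOME avoids clobbering distro-managed files,
# so the package manager and the hand-built software can coexist.
tar xzf foo-1.2.tar.gz
cd foo-1.2
./configure --prefix="$HOME/opt/foo-1.2"
make
make install
```

The "minutes or hours" estimate is mostly spent when `./configure` stops on a missing or too-new dependency, which is where the C and autoconf knowledge comes in.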
Programming languages matter because they make it possible to do pretty much anything in a small or medium amount of time. The more programming languages a system supports, the more programs can be ported to it from other systems.
The size of the software library matters too: as more software is available for a platform, more good software, and more software that does what you want, is available on it. Backward compatibility helps a lot here. Young systems like Windows Phone 7 or Android lack quality software and are not much more useful than a blank sheet.
Until the software industry fixes that, everybody has to live with the restrictions software patents impose, even more so with OSS software, since commercial vendors can pay for patent licenses that OSS projects usually cannot.
Android has a pretty restrictive Java-based API, making it hard to port software. It is very young, with a small library of software mainly containing poor applications. It is poorly customizable, and its modularity is hidden from the developer, as the native API is not directly accessible. It usually has awful support: drivers are bound to a machine, and the end user depends on the OEM to provide Android updates, which means that, most of the time, it's not possible to upgrade Android at all. Combined with a very fast evolution, it's easy to get "blocked" on Android 1.5, with very little software available. Even if the GPL license's legal requirements were met (kernel source code available), upgrading Android with the user's own little hands would be very tough. And, as everywhere, software patents bite.
Android may be OSS, but I would rather use Microsoft Windows any day, because I want to be the master of my computer.
Did you choose GRUB 2, GNOME 3, KDE 4, systemd or Wayland? Most Linux users just have to accept them, because the software of the past won't stay available forever, bundled with support for modern hardware. With time or money you can push the deadline back, but can you really avoid it?
OSS makes any piece of software more free, because the end user or community has the possibility to fix bugs or missing features and adapt it to their needs, but it's no panacea. Crap software remains crap. Good closed-source software can provide more freedom than poor OSS software.
One of the best features of OSS is the possibility of continuing the development of software abandoned by its main contributors, be they commercial or not. This is not guaranteed, but it is very probable for large and popular software. On the other hand, community-driven OSS tends to be more volatile and has none of the quality or support requirements of commercial software, making it dangerous to rely on for extended periods of time. So, for long-term support, it's a tie.
I hope I've broken most of the myths about OSS, which is often presented as a panacea that automatically "fixes" bad code. In the end, stability, compatibility, features, modularity, configurability and security matter, and none of them is guaranteed by the OSS model. Open source is a feature like any other.