Why I use and hate Android

Friday, November 6th, 2015

I was lucky enough to be gifted an original iPhone within the first year of them being available. I thought it was amazing, because it was (and let’s be honest, basically still is). But when it started physically wearing out, I ended up buying an Android phone, and have used Android since.

My reasoning was that Android is more like a “real” computer operating system. It exposes true multitasking to the programmer, so as a user I could run any combination of arbitrary applications at once. You can install apps from whatever app store you want, or sideload them with no store in the way at all. Plus it’s open source(ish)! As someone who is now studying computer science, these kinds of things seemed important. Clearly Android is not as polished as iOS, but I came to understand why people used Windows in the 90s, or desktop Linux today; I figured that as a technical person I could handle the rough edges, no problem.

As my current Android phone is now also wearing out, it’s about time to get a new one, unfortunately. There’s a top-of-the-line Android model I’ve got my eye on that offers great specs and a much lower price than even an older generation iPhone. But after using the Windows 95 of smartphones for a few years, I have also come to remember why the shrinking number of Mac and Amiga users could remain so smug in the 90s: like Windows before it, Android kind of sucks.

If you had shown me my current broken phone 20 years ago, I would have flipped my shit at this Star Trek technology that was soon to become available to me. It’s an unbelievable future supercomputer, in your pocket! But…our standards have changed in 20 years. Even compared to my 7-year-old dual-core laptop that is also barely hanging in there, this phone is really not very powerful. I wanted my phone to act like a “real” computer, and it turns out that I expect a real computer to do a lot more than I did 20 years ago. We take bulletproof multitasking for granted now, with modern Windows descended (sort of) from VMS, modern Macs descended from BSD/Mach, and modern Linux desktops even being somewhat usable.

But now that I’m studying operating systems and have thought more about the interaction between hardware and software, I can tell you that some fundamental things have not changed: one CPU core can run one task at a time, main memory is slower than caches and registers, hitting the disk is painful, and switching between tasks is not some free magic with no overhead. The fact that your desktop/laptop computer appears to seamlessly and effortlessly run many flashy user applications at a time is a complete illusion made possible by the fact that PC hardware is really powerful now; each core is actually running one thing at a time and switching between tasks hundreds of times a second so you don’t notice.
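You can glimpse this time-slicing illusion from user space. Here’s a minimal Python sketch (the names and numbers are mine, purely for illustration): two CPU-bound loops that can’t literally run at the same instant on one core, yet both make steady progress because the scheduler keeps switching between them.

```python
import threading
import time

# Two CPU-bound loops. On a single core (and under CPython, whose GIL
# serializes bytecode execution much like a one-core scheduler), only one
# runs at any instant -- the runtime interleaves them every few milliseconds.
counts = {"a": 0, "b": 0}
stop = threading.Event()

def spin(name):
    # Burn CPU until told to stop, counting iterations as visible progress.
    while not stop.is_set():
        counts[name] += 1

t1 = threading.Thread(target=spin, args=("a",))
t2 = threading.Thread(target=spin, args=("b",))
t1.start()
t2.start()

time.sleep(0.2)   # let them fight over the CPU for a moment
stop.set()
t1.join()
t2.join()

# Both loops advanced even though neither ever ran "at the same time";
# the switching is simply too fast to notice.
print(counts["a"] > 0 and counts["b"] > 0)
```

The switching isn’t free, which is exactly the overhead the paragraph above is pointing at; it just costs little enough on desktop hardware that you never see it.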

And here’s where Android kind of sucks. It does “real” multitasking all right, but you sure can’t take it for granted. Mobile CPUs use less power partly because they are just slower; phones (until recently) didn’t have anywhere near as much RAM as PCs; and even though smartphones use flash storage, it’s not exactly your desktop SSD in terms of performance. This is how you get situations where Android appears to slow to a crawl or freeze entirely for embarrassingly long periods (a couple seconds for a new phone, up to minutes for my old one).

Mobile RAM is slow enough as it is, and having to frequently swap in and out to a slow disk is really bad news. iOS was designed not to swap at all (I haven’t kept up with it so I don’t know if that’s still the case). Not running as many tasks at once means you move stuff in and out of RAM less often, your cache probably stays hotter, and you waste fewer CPU cycles context switching between user and kernel space and switching between tasks. As of a few years ago (and maybe still?), mobile hardware was legitimately not ready to do this kind of stuff, and iOS probably made the right choice in limiting user-level multitasking to a few specific things that are handled through tightly controlled OS services (obviously there are tons of invisible tasks actually running on both Android and iOS).
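The gap that makes swapping so painful is easy to feel with a crude timing sketch. This is an assumption-laden toy (payload size, file name, and the one-byte-per-page stride are my own choices, not anything from iOS or Android): it compares touching data already in RAM against round-tripping the same data through storage.

```python
import os
import tempfile
import time

data = b"x" * (8 * 1024 * 1024)  # 8 MiB payload

# Touching the data while it sits in RAM: read one byte per 4 KiB "page".
t0 = time.perf_counter()
total = sum(data[::4096])
ram_time = time.perf_counter() - t0

# Round-tripping the same data through storage, with fsync so the write
# actually reaches the device instead of stopping at the page cache.
path = os.path.join(tempfile.mkdtemp(), "payload.bin")
t0 = time.perf_counter()
with open(path, "wb") as f:
    f.write(data)
    f.flush()
    os.fsync(f.fileno())
with open(path, "rb") as f:
    back = f.read()
disk_time = time.perf_counter() - t0

print(back == data)  # True: the data survived the round trip
print(ram_time, disk_time)
```

Even on a fast desktop SSD the storage pass is typically orders of magnitude slower than the RAM pass; on a phone’s flash the penalty is worse, which is why a swap-happy system feels like it’s frozen.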

As it has for the last five decades, the semiconductor industry is charging ahead, delivering literally exponential progress in speed and capacity. Phones and tablets are getting more powerful. The Android phone I’m considering has more RAM than some laptops. So, is this problem solved? Well, I thought it was last time I bought a phone, and here we are now. As these things improve, we expect our devices to do more and more, and that new phone still doesn’t have as much RAM as my particular laptop, and its four cores still probably aren’t as powerful as the laptop’s two. The phone probably works pretty well straight out of the box, but I wonder how long it will stay that way.

In response to this new hardware situation, what has iOS done? Oh, they recently added the ability to run exactly two apps side by side. That…seems like a pretty good number. So the question is, is it worth paying more to do less? As a student, I’m probably not actually going to buy anything for a while, and Android sure is cheaper. I do still like the idea of being able to mess with it, and I’m becoming more able to do that now. But that’s just me – whenever anybody asks, I tell them to get an iPhone.
