What’s this about Net Neutrality?

As a professional in the field of web hosting, I see the effects of our current lack of Net Neutrality every day. They’re not immediately apparent to the average user, but the companies hardest hit (Netflix, for example) are starting to change that, most recently by calling out ISPs by name in their “we’ve got to reduce your video quality” message. In response, Verizon has come back with legal threats and claims that the real issue is the path Netflix chooses to use to get its content onto Verizon’s network. This is only true in the sense that Netflix could “choose” to cave in to Verizon and pay more to connect directly to their network. To help you understand why, I’ll need to explain a bit about how the internet works.

On the internet, there are three types of providers. First are the service providers: Google, Netflix, GoDaddy, and any number of other service networks. Second, you have the “interconnection” networks, who you’ve probably never heard of but who make up the backbone of the internet. Last, you have the so-called “eyeball networks” who deliver service to the average consumer, the “eyeballs” at the other end of the internet from the servers. The first and last groups (service providers and eyeball networks) pay the middle group for access to the internet, generally buying more than one connection and adding more whenever bandwidth needs require it.

Now, as a service provider, when I get to roughly 80% of my connection capacity, I add additional links (fiber optic cables connecting my equipment to the interconnection network). The costs of these links are generally proportional to the amount of money I make off of the customers requesting this data (my customers). ISPs like Comcast and Verizon also pay these interconnection networks for access to the internet, likely at similar or even lower cost-per-connection rates than what I pay. In this way, we’re equals and everything is neutral. Now, let’s say that I’ve been upgrading at that 80% capacity mark on a regular schedule (this is what Netflix does), but my customers’ ISP has just been watching its upstream links hit 90%, 95%, 100%, and finally start losing packets. Your ISP figures that by doing nothing, they’re not “throttling”, so they’re not breaking any rules. What’s happening here is that Verizon’s customers are requesting more data than Verizon wants to pay to be able to handle, so Verizon wants to charge Netflix and the other service providers to connect directly to it instead, bypassing the interconnection network.

The eyeball network is trying to get money at both ends of its pipe: once from its customers, and again from the companies its customers use. This shakedown is only possible because you, as the customer, can’t see the state of Verizon’s external connections, and they don’t think you’re smart enough to realize what they’re doing. Netflix is trying to inform you (though I’d have been more specific about the issue being Verizon’s interconnections, not its core network), and Verizon doesn’t like it at all. Fight back: contact your congressperson and tell them we need Net Neutrality. If you don’t, the monopoly currently providing your internet service will continue to do this until services like Netflix are impossible to run at a reasonable cost.

Installing Graphite with Ceres and Megacarbon

Graphite has changed much in the last year, though you wouldn’t know it from reading the official site. Much work has gone into the new metric writer, Ceres, which improves on the already fantastic Whisper backend that has been the default in the past. Ceres allocates space as metrics are recorded, so your storage needs scale more directly with use than they do with Whisper. Additionally, the various carbon daemons (carbon-cache, carbon-relay, etc.) have all been merged into a single daemon called “Megacarbon”. This new daemon supports all the same functionality but streamlines the configuration process. These instructions are for Ubuntu Server 13.04, but you can probably adapt them to your distro of choice if you have sufficient experience.
To start with, you’ll need to get some tools and libraries to help you configure everything. These will need to be installed system-wide:

sudo aptitude install git python-pip python-dev libcairo2 libffi-dev memcached uwsgi nginx uwsgi-plugin-python uwsgi-plugin-carbon
sudo pip install -U pip # Get the latest pip
sudo pip install -U virtualenvwrapper # Get the latest virtualenvwrapper

Now that you’ve got the requirements installed, you’ll need to create a user for graphite:

sudo adduser graphite

You’ll be prompted for details about the user; enter anything you wish. I recommend a password of 64 or more characters, and that you do not bother to record it. With sudo access to the box, you will not need it. The next step is creating the graphite storage location. I prefer /opt/graphite for this:

sudo mkdir /opt/graphite
sudo chown -R graphite.graphite /opt/graphite
sudo chmod 751 /opt/graphite # More on this later

Now switch to your graphite user and configure virtualenv wrapper:

sudo su - graphite
source /usr/local/bin/virtualenvwrapper.sh
echo "source /usr/local/bin/virtualenvwrapper.sh" >> .bashrc

Once virtualenvwrapper is configured, you can create the new virtualenv and start installing graphite:

mkvirtualenv -a /opt/graphite graphite # This should change your directory to /opt/graphite

We need a directory for our sources:

mkdir src
cd src/

From the src folder, we can start installing graphite components:

export GRAPHITE_ROOT=/opt/graphite
git clone git://github.com/graphite-project/carbon.git -b megacarbon
cd carbon/
pip install -r requirements.txt
python setup.py install
cd $GRAPHITE_ROOT/src/
git clone git://github.com/graphite-project/ceres.git
cd ceres/
pip install -r requirements.txt
python setup.py install

Next, configure Ceres’ datastore:

ceres-tree-create /opt/graphite/storage/ceres
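
If you want a quick sanity check, ceres-tree-create should leave a .ceres-tree metadata directory at the root of the tree (at least it does in the versions I’ve used):

ls -a /opt/graphite/storage/ceres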

Now we need to edit config files. Start by copying the example configs:

cd $GRAPHITE_ROOT/conf/carbon-daemons/
cp -r example/ writer
cd writer/

The three files that you should confirm are configured as you desire are daemon.conf, writer.conf, and db.conf. It’s a good idea to read the others as well. The parts you must change are:

in daemon.conf:
USER = graphite

in db.conf:
DATABASE = ceres
LOCAL_DATA_DIR = /opt/graphite/storage/ceres/

The other options can remain the same should you so desire. Now we install the web app:

cd $GRAPHITE_ROOT/src/
git clone git://github.com/graphite-project/graphite-web.git
cd graphite-web/

Unfortunately, I wasn’t able to get pycairo to install properly inside a virtualenv, so I used a drop-in replacement library called cairocffi, which works flawlessly as far as I can tell. Remove pycairo from the requirements file, then install the remainder along with cairocffi:

sed -i '/cairo/d' requirements.txt
pip install -r requirements.txt
pip install cairocffi

Next, we have a small hack to ensure cairocffi gets used instead of pycairo:

echo 'from cairocffi import *' > /home/graphite/.virtualenvs/graphite/lib/python2.7/site-packages/cairo.py
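
A quick one-liner confirms the shim resolves before we go further; no output means the import worked:

python -c "import cairo"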

Finally, install graphite-web:

python setup.py install

Now you need an entry point for your application:

cd $GRAPHITE_ROOT/webapp/

Now create wsgi.py in this folder and populate it with the following text:

import os

# Point Django at graphite's settings before the handler is loaded
os.environ['DJANGO_SETTINGS_MODULE'] = 'graphite.settings'

import django.core.handlers.wsgi

# uwsgi imports this module and calls "app" (see UWSGI_CALLABLE below)
app = django.core.handlers.wsgi.WSGIHandler()

This file is the module uwsgi will import to run your app. You should also take the opportunity now to configure your database; syncdb will prompt you to create a superuser, which is the account you’ll log into the application with:

cd $GRAPHITE_ROOT/webapp/graphite/
python manage.py syncdb
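
If you skipped the superuser prompt during syncdb, you can create one afterwards with Django’s built-in command:

python manage.py createsuperuser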

We’re nearly there. Now we get everything starting on boot and configure nginx/uwsgi:

exit # So we're not running as the graphite user

Create and edit /etc/init/carbon-writer.conf and place the following inside (change eth0 to your adapter name from ifconfig):

description "carbon writer"

start on (net-device-up IFACE=eth0 and local-filesystems)
stop on runlevel [016]

expect daemon

respawn limit 5 10

env GRAPHITE_DIR=/opt/graphite
exec start-stop-daemon --oknodo --pidfile /var/run/carbon-writer.pid --startas $GRAPHITE_DIR/bin/carbon-daemon.py --start writer start

Now start the carbon daemon and ensure it’s listening:

sudo start carbon-writer
sudo netstat -plan | grep :2003

You should see a process listening on port 2003.
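
Before moving on, you can optionally confirm the writer actually stores data by feeding it a metric over carbon’s plaintext protocol (the metric name below is arbitrary, and depending on your netcat variant you may not need -q0):

echo "test.deploy.example 42 `date +%s`" | nc -q0 localhost 2003

Once that’s up, you can configure the webserver. Create and edit /etc/nginx/sites-available/graphite and enter the following: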

server {
  listen 80;
  keepalive_timeout 60;
  server_name graphite.example.com;
  charset utf-8;
  location / {
    include uwsgi_params;
    uwsgi_param UWSGI_CHDIR /opt/graphite/webapp/;
    uwsgi_param UWSGI_PYHOME /home/graphite/.virtualenvs/graphite/;
    uwsgi_param UWSGI_MODULE wsgi;
    uwsgi_param UWSGI_CALLABLE app;
    uwsgi_pass 127.0.0.1:3031;
  }
  location /content/ {
    alias /opt/graphite/webapp/content/;
    autoindex off;
  }
}

Naturally you should replace example.com with your actual domain. Now we ensure nginx can read the content directory (this is why we set /opt/graphite to permission 751 earlier):

sudo chown -R graphite.www-data /opt/graphite/webapp/content
sudo chmod 751 /opt/graphite/webapp /opt/graphite/webapp/content

Now we create a configuration file for uwsgi. Create and edit /etc/uwsgi/apps-available/graphite.ini and enter the following:

[uwsgi]
plugins = python
gid = graphite
uid = graphite
vhost = true
logdate
socket = 127.0.0.1:3031
master = true
processes = 4
harakiri = 20
limit-as = 256
wsgi-file = /opt/graphite/webapp/wsgi.py
virtualenv = /home/graphite/.virtualenvs/graphite
chdir = /opt/graphite
memory-report
no-orphans
carbon = 127.0.0.1:2003

Note the carbon directive, which tells uwsgi to send metrics about requests, memory, etc. to carbon so they can be graphed. This will be our test data to ensure everything works.

By default in Ubuntu, uwsgi will run as the www-data user and cannot change privileges to the graphite user. To work around this, we can modify the default configuration at /usr/share/uwsgi/conf/default.ini and remove the uid and gid lines. This will allow the graphite.ini file we just created to drop privileges to the graphite user.  As an important security consideration here, you should ensure that all of your individual uwsgi configs specify a uid/gid if you do this.  Never run exposed daemons as root!

We can now enable the application and load the configurations:

sudo ln -s /etc/nginx/sites-{available,enabled}/graphite
sudo ln -s /etc/uwsgi/apps-{available,enabled}/graphite.ini
sudo service nginx restart
sudo service uwsgi restart

If everything went well, you should now be able to reach your site at the URL you configured. When you expand the Graphite group, you should see a uwsgi group available that contains metrics about the running instance of uwsgi (graphite itself).
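
You can also check from the command line with the render API; the uwsgi metric path below is a guess on my part, so browse the tree in the web UI for the exact name:

curl 'http://graphite.example.com/render?target=uwsgi.*.requests&from=-10min&format=json'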

Update July 1st, 2013:

Currently, the ceres maintenance plugins are not included when installing, so it’s not possible to keep your ceres databases cleaned properly (see this issue).  For now, you need to do the following as the graphite user:

cd $GRAPHITE_ROOT/
cp -rfp src/carbon/plugins/maintenance plugins/maintenance

Then, edit your crontab (crontab -e) and add this line:

7 1 * * * /opt/graphite/bin/ceres-maintenance --configdir=/opt/graphite/conf/carbon-daemons/writer/ rollup

Update 2: March 17th, 2015

Much has changed since I wrote this blog post. Please see akuzmin’s comments below for things he had to do differently from the above instructions. Thank you, akuzmin!

Writing Python like a Jedi

Jedi is an auto-completion library for Python code that I’ve so far really enjoyed using in vim. It analyzes your code (and anything you import) to guess what you mean when you invoke completion (Ctrl+Space by default), and it’s advanced enough to know the types of your variables and the argument names for your function calls. Out of the box with jedi-vim, the configuration requires a bit more keypressing than I’m generally happy with, so I used Supertab as suggested. I’ve included instructions below that should get you up and running fast.

  1. If you don’t already use it, configure pathogen.vim:
    1. Get pathogen:
      mkdir -p ~/.vim/autoload ~/.vim/bundle; \
      curl -Sso ~/.vim/autoload/pathogen.vim https://raw.github.com/tpope/vim-pathogen/master/autoload/pathogen.vim
    2. Only do this if you don’t have an existing vimrc file:
      echo "syntax on
      filetype plugin indent on" > ~/.vimrc
    3. Add pathogen to your .vimrc file:
      sed -i '1i\
      call pathogen#infect()' ~/.vimrc
  2. Install jedi-vim and jedi:
    cd ~/.vim/bundle/; \
    git clone git://github.com/davidhalter/jedi-vim.git; \
    cd jedi-vim; \
    git submodule update --init
  3. Install Supertab:
    cd ~/.vim/bundle/; \
    git clone git://github.com/ervandew/supertab.git
  4. Configure Supertab:
    echo 'let g:SuperTabDefaultCompletionType = "context"' >> ~/.vimrc
  5. Optional: Disable auto-complete on . (bothers me, up to you)
    echo 'let g:jedi#popup_on_dot = 0' >> ~/.vimrc

At this point, you can begin to type a member of a library and hit <Tab> to automatically suggest ways to finish (or just type <Tab> after a . to show all available options).

MySQL Query Parser

In my day to day activities, I often need to know who’s being especially bad at MySQL on multi-user systems. Sure, running SHOW PROCESSLIST; a few dozen times might give me an idea, but often you just have to start logging and hope for the best.

NO MORE I SAY!

I built a very basic MySQL query parser in Python that takes a MySQL 5.0/5.1 log file (any size; it’s rather memory efficient) and returns the total number of SELECT, UPDATE, and INSERT queries, tallies their queries per second, and sorts by SELECTs to give a final output of just who’s been bad and in what way.

I’ll let the help output do the talking:

redkrieg@h4xs3rv3r [~]$ queryparser.py --help
usage: queryparser.py [options] [filename]

[filename] is optional and defaults to stdin.

options:
  -h, --help            show this help message and exit
  -v, --verbose         Print info on stderr (default: True)
  -q, --quiet           Suppress stderr output
  -o FILE, --output=FILE
                        Write output to FILE (default: stdout)
  -r REGEX, --regex=REGEX
                        Tally arbitrary REGEX string (slow)

It supports arbitrary regex searches and takes a best-guess approach to identifying the appropriate user when it’s not obvious. Without the regex option, it can chew through a 1GB log in about 30 seconds on my VPS while staying in a single execution thread.
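
A typical invocation looks something like this (the log path is hypothetical; point it at wherever your general query log lives):

queryparser.py -q -o /tmp/mysql-report.txt /var/lib/mysql/myhost.log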

Download the latest version here

Adding notifications to finch

So, we use XMPP for our internal chat system at work, but I hate pidgin and empathy’s not much better. Naturally I searched for command line alternatives, and the least offensive one I could find was finch. Admittedly it uses libpurple on the backend, so it’s really pidgin, but at least it’s in a terminal now. Of course, I lost all my notifications of incoming messages, which isn’t cool, so I cooked up a simple script and call it instead of playing a sound through finch. In finch’s “sounds” menu, simply change “Automatic” to “command”. In Ubuntu, you’ll need two new packages:

sudo aptitude install libnotify-bin mplayer

And you’ll need this script, and to remember the path to it:

#!/bin/bash

# Find the most recently modified jabber log, grab its last line,
# and strip any HTML tags from it
latest_line=`find ~/.purple/logs/jabber/ -mtime -1 -printf "%T@ %Tx %TX %p\n" | sort -n -r | head | cut -d ' ' -f 2- | awk '{print $NF}' | head -1 | xargs tail -1 | sed -e 's#<[^>]*>##g'`

# Play the sound file finch passes as the first argument
mplayer "$1" >/dev/null 2>&1 &

# Title: the sender's name; body: the message text
notify-send "`echo $latest_line | cut -d ':' -f 3 | awk -F ')' '{print $2}'`" "`echo $latest_line | cut -d ':' -f 4-`"
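
Save it wherever you like (the path below is just my suggestion), make it executable, and point finch’s sound command at it. Finch passes the sound file path as the first argument, which the script hands off to mplayer; I believe the placeholder is %s as in pidgin, but check your version:

chmod +x ~/bin/finch-notify
# Sound command in finch: /home/youruser/bin/finch-notify %s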

Pianobar – An open source Pandora client

I’ve been using pianobar for about two weeks now and have yet to experience any of the crashes that plagued my usage previously. It is by far the most efficient, stable way to use Pandora. In Ubuntu, you can install it from the software center or via “apt-get install pianobar”. I also recommend installing libnotify-bin if you want to use notifications on song changes (instructions below).

Once installed, you can either run it straight away by opening a terminal and typing “pianobar”, or you can set up some configurations first to enhance your experience. I use a shell script to catch events from the player and display notifications with the song artist and title.

For configuration, edit ~/.config/pianobar/config in your favorite editor. Here is my configuration (note that I pay for Pandora One; if you do not, you can’t specify mp3-hifi as your audio_format):

event_command = /home/myuser/bin/pianobar-notify
user = user@domain.com
password = yourpassword
audio_format = mp3-hifi
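
One housekeeping note: your password sits in this file as plain text, so it’s worth locking the permissions down (create the directory first if your editor didn’t):

mkdir -p ~/.config/pianobar
chmod 700 ~/.config/pianobar
chmod 600 ~/.config/pianobar/config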

The script at /home/myuser/bin/pianobar-notify will be called any time an “event” is sent. See the man page for details on how you can extend this. Below is the pianobar-notify script I use (don’t forget to chmod +x):

#!/bin/bash
# create variables
while read L; do
  k="`echo "$L" | cut -d '=' -f 1`"
  v="`echo "$L" | cut -d '=' -f 2`"
  export "$k=$v"
done < <(grep -e '^\(title\|artist\|album\|stationName\|pRet\|pRetStr\|wRet\|wRetStr\|songDuration\|songPlayed\|rating\|coverArt\)=' /dev/stdin) # don't overwrite $1...

case "$1" in
  songstart)
#    echo 'naughty.notify({title = "pianobar", text = "Now playing: ' "$title" ' by ' "$artist" '"})' | awesome-client -
#    echo "$title -- $artist" > $HOME/.config/pianobar/nowplaying
#    # or whatever you like...
    notify-send "Pianobar - $stationName" "Now Playing: $artist - $title"
    ;;

#  songfinish)
#    # scrobble if 75% of song have been played, but only if the song hasn't
#    # been banned
#    if [ -n "$songDuration" ] &&
#       [ $(echo "scale=4; ($songPlayed/$songDuration*100)>50" | bc) -eq 1 ] &&
#       [ "$rating" -ne 2 ]; then
#      # scrobbler-helper is part of the Audio::Scrobble package at cpan
#      # "pia" is the last.fm client identifier of "pianobar", don't use
#      # it for anything else, please
#      scrobbler-helper -P pia -V 1.0 "$title" "$artist" "$album" "" "" "" "$((songDuration/1000))" &
#    fi
#    ;;

  *)
    if [ "$pRet" -ne 1 ]; then
      notify-send "Pianobar - ERROR" "$1 failed: $pRetStr"
    elif [ "$wRet" -ne 1 ]; then
      notify-send "Pianobar - ERROR" "$1 failed: $wRetStr"
    fi
    ;;
esac

Have fun enjoying your music without a web browser!

Deep Fried Philly Steak

In a stroke of pure genius this weekend, my brother created the Deep Fried Philly Steak. For the benefit of mankind, I shall hereby document the process:

Use a frying pan with a touch of olive oil to sauté some vegetables. We used onion, green pepper, mushrooms, and minced garlic, like these:

They might not look so great raw, but fry these bad boys up...

When the vegetables are sautéed to perfection, add sliced beef. We used steak-ums, but I imagine this would be better with something fresher. Fry ’til the beef is done, then add some cheese blend. It’ll look something like this, but more:

Delicious meat... and fat

Next, roll out some roll-up pizza dough; we used the Pillsbury Classic Pizza Crust for ours. Place six slices of provolone on the rolled-out pizza crust and cut into six squares. Place a ball of your fried treat on each of the slices of provolone:

This is what you're looking for.

Finally, you can top with some mayo if you truly hate your vascular system:

It's a heart attack in the making.

Pull the four corners up to meet in the center so you have four points sticking out, then grab these and fold them up as well. Make sure there aren’t any gaps through which you can see steak; you should have a ball of stretched dough with no holes. Drop this into the deep fryer, which you should have filled with vegetable oil (or lard I guess, but I wanted to be alive to write this) and pre-heated to 375 degrees Fahrenheit.

You can tell it's done when it floats, but I let it go 'til it's good and brown

When they’re done (golden brown and floating; you may need to turn them to get an even fry), pull them out and place them on some paper towels to soak up some of the liquid death in which they sit. You now have the most amazing treat ever created. Enjoy.

Philly steak, in convenient ball form.

Compiling heimdall on Ubuntu 10.10

Heimdall is an open source replacement for the Samsung Galaxy S flashing utility called Odin. I like this because I can’t stand windows, so if I were to have “bricked” my Vibrant, I’d have been pretty well screwed until I found a friend with a windows machine. Now, there’s an option for us “regular” folks that don’t like to pay the microsoft tax just to use our hardware.

First things first, you’ll need some development packages, so run this command from the terminal:
sudo aptitude install build-essential libusb-1.0-0-dev

Next, you’ll need the heimdall source from here. You’ll want to get the linux source archive.

Now unzip the source to a folder in your home directory; I use ~/source/. There should now be a folder called “Heimdall-Source”.

Change directories to Heimdall-Source and run the following commands:
./configure
make
sudo make install

You now have heimdall installed. The README file in the Heimdall-Source folder has information on using the tool, but the important things to know are: you must run it with sudo; you’ll need to untar any .tar or .tar.md5 files you have in the odin bundle (tar xvf filename.tar); and you should include only the parameters that match files you actually have in your odin tarball (if there’s no Sbl.bin in your tarball, don’t include the “--secondary Sbl.bin” part of the heimdall command). If your odin bundle contains all of the files heimdall can use, your command would look like this:
sudo heimdall flash --pit s1_odin_20100512.pit --factoryfs factoryfs.rfs --cache cache.rfs --dbdata dbdata.rfs --boot boot.bin --secondary Sbl.bin --param param.lfs --kernel zImage --modem modem.bin
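
If you’re not sure which files your bundle contains, list the tarball first and match flags to what you actually see (the filename here is hypothetical; .tar.md5 files are ordinary tarballs with a checksum appended, so tar may print a warning at the end):

tar tvf vibrant_odin_package.tar.md5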

Minimalist fcgid with custom php.ini on cPanel

I recently changed to using fastcgi on my personal cPanel server (11.28, CentOS) and thought others might benefit from my findings, as I wasn’t able to find any complete configuration tutorials.

I was able to replicate the functionality I enjoyed from SuPHP (per directory php.ini configurations) with a simple wrapper script that’s called using an “Action” parameter in the .htaccess file.

To start, you’ll need to run EasyApache:
/scripts/easyapache
I have disabled “Mod SuPHP” from the easyapache configuration and enabled “Mod FCGID”. This was the only change required to enable FastCGI support using the system php.ini configuration.

I reviewed many forum posts on the topic of local php.ini configuration with fcgid, but none really focused on the per-user aspect and how it can be user definable. I accomplished this with a very simple wrapper script for php-cgi that sets the PHPRC shell variable to the desired path, then a handler and action in .htaccess to call the script for handling php files. This allows for different directories to use different php.ini configurations by creating a different wrapper for each php.ini required.

First, edit the wrapper with your favorite text editor:
nano -w ~/public_html/cgi-bin/php5.fcgi
Note, the name is arbitrary, but it needs to be in cgi-bin. Add the following code:
#!/bin/sh
PHPRC=~/public_html/
export PHPRC
exec /usr/local/cpanel/cgi-sys/php5

Then save, and make it executable:
chmod 755 ~/public_html/cgi-bin/php5.fcgi

Next, you need to associate it with the .php extension (and any others you’d like). Add these two lines to ~/public_html/.htaccess (or any .htaccess file):
AddHandler php5-fastcgi .php
Action php5-fastcgi /cgi-bin/php5.fcgi

You can add additional extensions to the AddHandler line like so:
AddHandler php5-fastcgi .php .phtml .html
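
Since the wrapper does nothing but export PHPRC, per-directory php.ini configurations fall out naturally: create a second wrapper with a different PHPRC and point that directory’s .htaccess at it. A sketch, with hypothetical names and paths:

#!/bin/sh
# ~/public_html/cgi-bin/php5-myapp.fcgi - hypothetical second wrapper
PHPRC=~/public_html/myapp/
export PHPRC
exec /usr/local/cpanel/cgi-sys/php5

Then in ~/public_html/myapp/.htaccess:

AddHandler php5-myapp-fastcgi .php
Action php5-myapp-fastcgi /cgi-bin/php5-myapp.fcgi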

Update: The above instructions will break subdomains and addon domains if a separate php5.fcgi is not placed in each sub/addon domain’s cgi-bin folder. To combat this, I’m using a system-accessible php5.fcgi that sets PHPRC to ~/public_html/. To do the same:
nano -w /usr/local/cpanel/cgi-sys/php5.fcgi
Then add this code:
#!/bin/sh
PHPRC=~/public_html/
export PHPRC
exec /usr/local/cpanel/cgi-sys/php5

Set permissions correctly:
chown root:wheel /usr/local/cpanel/cgi-sys/php5.fcgi
chmod 755 /usr/local/cpanel/cgi-sys/php5.fcgi

Finally, change .htaccess to use the new path:
AddHandler php5-fastcgi .php
Action php5-fastcgi /cgi-sys/php5.fcgi

That’s it. It’s generic, can be copied to any user account without modification, and uses system configs for fcgid options, so it can use sane defaults.

I’ve uploaded my cpanel3-skel folder with perms set in case you would like to use it on your server.  Click here.

Update:
Helius reports in the comments that the steps above can break Fantastico on the server. To correct this, you’ll need to build cPanel’s PHP:
/scripts/makecpphp
Thanks for the info Helius!

ADB over SSH – Fun with port forwards

So last week one of my coworkers and I were discussing a VPN configuration to allow adb shells over 3G. Just tonight I realized that a VPN isn’t even required, SSH can do the same job with relatively little effort. Here’s how I rigged my setup.

Prerequisites:

  • A functional SSH server, accessible from outside your network.
  • Connectbot on your phone with a connection configured for your home computer.
  • A functional ADB over USB install with correct PATH variables set. When you open a new command prompt and type “adb shell” you should see a prompt from your phone.

Configuration of the above items is well documented elsewhere and is outside the scope of this article.

So, having the prerequisites in order, you start by modifying your Connectbot connection as follows:

  • Long-press your connection in the list and choose “Edit host”.
  • Press “Post-login automation”.
  • In the box, type “adb connect localhost:5555[ENTER]”; the new line here (the [ENTER] key) is necessary to automatically run the command.
  • Press “OK”.
  • Press back to return to the connection list.
  • Long-press your connection in the list and choose “Edit port forwards”.
  • Press Menu, then “Add port forward” and enter the following:
    • Nickname: “ADB”.
    • Type: “Remote”.
    • Source port: “5555”
    • Destination: “localhost:5555”
  • Press “Create port forward”.

Your connection is now configured to tunnel the ADB protocol over port 5555 from your computer to the phone via reverse SSH tunnelling. The next step is to configure ADB on the phone to listen over tcp/ip:

  • On your computer, with the phone connected via USB, type “adb tcpip 5555”.

That’s it! You can now disconnect your USB cable, take your phone on a car ride, etc. If you need to use adb, simply use Connectbot to connect to your home computer; after the connection is authenticated (you can use password or public-key), the adb connection will be made automatically, and you can use adb as you normally would!
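
You can confirm the tunnel is live from your computer; once the post-login command has run, the phone shows up as a TCP device:

adb devices
# should list something like: localhost:5555   device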

Remember that over 3G you’re going to have some delay for commands. Here’s an example push:

redkrieg@h4xt0p:~$ time adb push Downloads/UniversalAndroot-1.6.2-beta5.apk /sdcard/
11 KB/s (964401 bytes in 83.041s)
redkrieg@h4xt0p:~$ ls -hal Downloads/UniversalAndroot-1.6.2-beta5.apk
-rw-r--r-- 1 redkrieg redkrieg 942K 2010-10-08 15:08 Downloads/UniversalAndroot-1.6.2-beta5.apk
redkrieg@h4xt0p:~$

So yeah, it’s slow as dirt, but I totally pushed a file from my home computer to my phone from across the city.

To undo this and restore normal USB ADB functionality, you must first connect with Connectbot using the instructions above, then from your computer run “adb usb” to return the phone to USB mode. If you’re unable to do this, you’ll need to run the following from a local shell on the phone:

su
setprop service.adb.tcp.port -1
stop adbd
start adbd

That’s it. Have fun!

Update: I’ve just thought things through and realized that this leaves root wide open to remote exploit if someone knows your IP. Probably best to switch back to USB adb once you’re done until I figure out a good way to block non-localhost connections to 5555 in iptables on boot. If you know how to do this, please post a comment, otherwise it’ll be my project for the night.

Update 2:
After some testing, it seems like T-mo blocks inbound connections like this, so iptables shouldn’t be necessary. That being said, the following rules should take care of any lingering doubt:

iptables -A INPUT -p tcp --dport 5555 -s 127.0.0.1 -j ACCEPT
iptables -A INPUT -p tcp --dport 5555 -j DROP