What the architect says vs. what they mean

Plant tongue firmly in cheek…

What the architect says | What they mean
Novel | No-one understands it but me
Leverages bleeding-edge technology | This project is my guinea pig
Overhaul | Migration nightmare
This will synchronise… | /* TODO: dev inserts compensation logic here */
Enterprise service bus | My consultant friend needs some work
RESTful | Y’know… HTTPish
Auto-scaling | If the software sucks, we’ll just keep throwing tin at it
May be memory-intensive | Uses Java
Mature enterprise solution | Ancient and bloated
Without vendor lock-in… | …but in practice, you’re not going anywhere
PCI-compliant | We’re only considering security because we’re required to
Without buy-in from the rest of the business… | You might actually have to murder someone to make this happen
Interim solution | Excel spreadsheets that won’t be replaced for two years
SalesForce | I am punishing the developers

Adjust, Advance, Specialize or Switch? Why I chose DevOps.

For the past seven years or so, I’ve been a full-stack developer. My career so far, if I’m honest, hasn’t been something I’ve particularly planned: one opportunity has fallen neatly into my lap after another, and I’ve tended to go with the flow. This July, for the first time, I found myself with several options open at once, and I wanted to be more considered in my choice and more mindful of my future.

Like many of my former colleagues, I’ve come to realise that web development can be quite limiting. Most of us in the tech industry are driven by mastery, autonomy and a sense of purpose. When one of those things is exhausted, what happens then? Autonomy and a sense of purpose are usually intrinsic to a team or company and in our industry are becoming much more commonplace. A sense of mastery is trickier: it needs to be constantly renewed with new challenges; as a web developer, once you’ve reached a certain level, what can those new challenges be?


14 user stories every web project needs

Legal requirements can often be overlooked when planning a new project. Unless you’re a corporate behemoth with your own legal team, understanding what’s required is mostly just based on copying other people. Even if you took the time to hire outside counsel and explain your project to them, you’d have to find a specialist. You absolutely should do this and factor it into your budget, but to get a head start, here are a few basics broken down by legislative area. Note: this is based on applications operating and servicing users in the UK and EU.


The first three places I look for XSS bugs in front-end JavaScript

Here are the first things I look for when hunting for security issues in front-end JavaScript. By simply grepping the code for these deficiencies and then tracing the calls back, I’ve found XSS bugs in Amazon.com, LinkedIn, Tumblr and Slack.

  1. Unsanitized input in URLs. Anywhere window.location is read or popstate is listened to is my first port of call. It’s the quickest, easiest way to get a payload into the application.
  2. Window messaging. window.postMessage and the message event are also easy ways to compromise an application if the origin rules allow. Sharing widgets may be particularly vulnerable in this case. One bug I found permitted any mark-up to be posted to their domain and then rendered without sanitization.
  3. Finally, and this takes a lot longer: places where unsanitized markup is rendered. I start with the popular jQuery methods – $.html, $.append and friends – then other sinks like direct assignment to innerHTML. If a templating language is in use, I check which variables can be passed to unescaped output. The less powerful templating languages are actually more vulnerable here, as developers are forced to build complex output in code and then dump it into the templates.
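As a sketch of what that grepping looks like in practice, something like the following covers the three sink families above (the src/ path is an assumption; the demo file only exists so the patterns have something to match):

```shell
# Illustrative first-pass greps for the three sink families described above.
# Point these at the real codebase; src/ and the demo file are stand-ins.
mkdir -p src
printf 'el.innerHTML = location.hash\n' > src/demo.js   # demo file for the example

# 1. Unsanitised input from the URL
grep -rnE 'window\.location|location\.(hash|search|href)|popstate' src/

# 2. Window messaging
grep -rnE 'postMessage|addEventListener\(.message' src/ || true   # may match nothing

# 3. Unsanitised mark-up sinks
grep -rnE 'innerHTML|document\.write|\.html\(|\.append\(' src/
```

Each hit is then traced back by hand to see whether attacker-controlled data can reach it.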

There’s still a lot to be learned and other bugs to be found by examining the full codebase, but these three starting points are usually the most fruitful. Even if I find nothing, it’s interesting to see how large JavaScript applications are architected by different companies.

So, a learning opportunity, sometimes a reward, and when correctly and responsibly reported, the web gets a little bit safer. Everyone wins!

Serving Logitech Media Server (slimserver / squeezeboxserver) over HTTPS

Note: this will not secure the CLI or other channels of communication between your server and clients, only HTTP

First, we need to stop LMS listening on 9000 without SSL. Edit /etc/default/logitechmediaserver and change the options line to:

SLIMOPTIONS="--httpaddr --httpport 8999"

Then restart the server:

sudo service logitechmediaserver restart

Now set up Apache to handle SSL requests. First install the packages you need:

sudo apt-get install apache2
sudo a2enmod headers proxy proxy_http ssl

Now add a site configuration at /etc/apache2/sites-available/logitechmediaserver.conf:

Listen 9000
NameVirtualHost *:9000

<VirtualHost *:9000>
    ServerName your.domain

    CustomLog ${APACHE_LOG_DIR}/logitechmediaserver-access.log combined
    ErrorLog ${APACHE_LOG_DIR}/logitechmediaserver-error.log
    LogLevel warn

    SSLEngine on
    SSLProtocol All -SSLv2 -SSLv3
    SSLHonorCipherOrder On
    Header always set Strict-Transport-Security "max-age=63072000; includeSubdomains; preload"
    Header always set X-Frame-Options SAMEORIGIN
    Header always set X-Content-Type-Options nosniff
    # Requires Apache >= 2.4
    SSLCompression off
    SSLCertificateFile /path/to/your/cert.pem
    SSLCertificateKeyFile /path/to/your/key.pem
    SSLCertificateChainFile /path/to/your/chain.pem

    <Location /html/js-main.html>
        Header set Content-Type text/javascript
    </Location>

    # LMS itself now listens on 8999 (set in /etc/default/logitechmediaserver above)
    ProxyPass "/" "http://localhost:8999/"
    ProxyPassReverse "/" "http://localhost:8999/"
</VirtualHost>

Note the <Location> directive – this ensures that, despite LMS’ best efforts, its JavaScript is served with the correct Content-Type header, without which sensible browsers would block it from running.

Now just test the configuration and restart Apache:

sudo apache2ctl configtest && sudo apache2ctl restart

You should now be able to access the interface as normal on port 9000, but over SSL instead!

Setting up a Raspberry Pi B+ with EmonCMS / EmonHub

I recently got an emonTx v3 (pre-assembled from the OpenEnergyMonitor store) and RFM12Pi. I also separately got a Raspberry Pi B+ and installed / configured the rest of the system without using the pre-installed SD card (mostly because the MicroSD to SD adapter I have is broken).

On the whole I found the wide range of different and conflicting documentation for setting up the Pi rather confusing, particularly with the differences between newer versions of the software. For those reasons I’ve written this step-by-step guide to configuring a Pi with Raspbian installed from scratch. This probably won’t remain valid all that long, but may help as a basis for later versions.

Replace “pi” user with your own (optional)

SSH in or log in at the console using the default Raspbian credentials.
sudo useradd -m -G adm,sudo,dialout,cdrom,audio,video,plugdev,games,users,netdev,input,spi,gpio myuser
sudo passwd myuser

Reconnect or log in as your new user

sudo userdel -r pi

Install prerequisites

sudo apt-get update
sudo apt-get upgrade
sudo apt-get install git vim build-essential htop iotop mysql-server

Configure MySQL database

echo "
CREATE DATABASE emoncms;
CREATE USER 'emoncms'@'localhost' IDENTIFIED BY 'password';
GRANT ALL PRIVILEGES ON emoncms.* TO 'emoncms'@'localhost';
FLUSH PRIVILEGES;
" | mysql -u root -p

Configure email sending (optional)

sudo apt-get install exim4 exim4-config
sudo dpkg-reconfigure exim4-config

Change the hostname from the default (optional)

echo "$NEW_HOSTNAME" | sudo tee /etc/hostname
sudo sed -i "s/raspberrypi/$NEW_HOSTNAME/g" /etc/hosts
sudo shutdown -r now

Install EmonCMS

sudo tee /etc/apt/sources.list.d/emoncms.list << 'EOF'
deb http://emon-repo.s3.amazonaws.com wheezy unstable
EOF
sudo apt-get update
sudo apt-get install php-pear redis-server emoncms emoncms-module-event emoncms-module-openbem emoncms-module-sync emoncms-hub
sudo pecl install channel://pecl.php.net/dio-0.0.7
sudo ln -s /etc/init.d/emoncmsHub /etc/init.d/oemgateway
sudo a2ensite emoncms
sudo a2enmod rewrite php5
sudo apache2ctl restart
sudo tee /etc/sysctl.d/overcommit_memory.conf << EOT
vm.overcommit_memory = 1
EOT
sudo service redis-server restart

Navigate to http://your_hostname/emoncms & click register

Go to /emoncms/input/api and copy the API key

sudo dpkg-reconfigure emoncms-hub
sudo useradd -Umrs /bin/true -G dialout,tty emon
sudo vim /etc/init.d/emoncmsHub

# Change DAEMONUSER from "pi" to "emon"
sudo service emoncmsHub restart
sudo vim /boot/cmdline.txt
# Remove the console=ttyAMA0 entry
sudo vim /etc/inittab
# Comment out the respawn:/sbin/getty -L ttyAMA0 line at the end of the file

Configure Wifi with WPA2 (optional)

# Keep the space before the following command - it hides it (and your passphrase)
# from shell history when HISTCONTROL is set to ignorespace or ignoreboth
 wpa_passphrase 'network_name' 'passphrase' | sudo tee -a /etc/wpa_supplicant/wpa_supplicant.conf
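For reference, wpa_passphrase emits a block like the following, which is what ends up appended to wpa_supplicant.conf (the hex key shown here is a placeholder). You may want to delete the commented plaintext #psk line afterwards:

```
network={
	ssid="network_name"
	#psk="passphrase"
	psk=<64 hexadecimal characters derived from the passphrase>
}
```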
sudo vim /etc/network/interfaces
# Change "wpa-roam /etc/wpa_supplicant/wpa_supplicant.conf" to
# "wpa-conf /etc/wpa_supplicant/wpa_supplicant.conf"
# Add "auto wlan0" to the wlan0 section.
# Change "iface wlan0 inet manual" to "iface wlan0 inet dhcp"
# Add "wireless-power off" to the wlan0 section
sudo reboot

Tracking system uptime via SNMP with Cacti

I wanted to track system uptime via SNMP but the existing solutions on the Cacti forums[1][2] & wiki weren’t up to scratch.

The solution I’ve developed, available as a GitHub Gist, uses the existing Device SNMP details (rather than having to re-specify authentication details or tailor scripts to each device) and provides a graph showing the number of days since the SNMP daemon was started. (Note that SNMP doesn’t necessarily reveal the exact system uptime: it declares sysUpTime to be the time since the network management subsystem was last started.)
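Since sysUpTime is a TimeTicks value (hundredths of a second), turning it into the days figure the graph shows is just arithmetic; a quick sketch with an illustrative value:

```shell
# sysUpTime is reported in TimeTicks: hundredths of a second since the
# network management subsystem started.
ticks=864000000   # illustrative value, as returned for SNMPv2-MIB::sysUpTime.0
echo "$(( ticks / 100 / 86400 )) days"   # → 100 days
```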

Monitoring Draytek Vigor routers via SNMP with Cacti

I have a couple of Draytek Vigor routers and wanted to monitor their characteristics using SNMP and Cacti, but found Cacti’s support for the ADSL-Line MIB somewhat outdated and its supported feature set didn’t match that of the Drayteks.

I stripped it down a bit, added extra graphs & variables and have created an export for others to use. Installation instructions and the files themselves can be found on Github.

Note that these configurations have stripped out some features that may be supported on other routers/modems or other brands.

Configuring SNMPd in a sane way for remote monitoring

I hunted high and low for a decent guide to installing and configuring snmpd for remote monitoring on Ubuntu / Debian-based systems and found only outdated, incomprehensible cruft, so here are the steps I took to configure it on my devices:

sudo apt-get install snmp snmpd
sudo service snmpd stop

Edit /etc/snmp/snmpd.conf in your favourite editor like so:

  1. Comment out agentAddress udp:127.0.0.1:161
  2. Add the line agentAddress udp:161 (security-conscious users may wish to change the port number here)
  3. Comment out rocommunity public localhost
  4. Comment out rocommunity public default -V systemonly
  5. Comment out rouser authOnlyUser
  6. Set sysLocation
  7. Set sysContact
  8. Edit the disk paths to paths you wish to monitor
  9. Save and quit
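For reference, after those edits the relevant lines of /etc/snmp/snmpd.conf look something like this (the sysLocation, sysContact and disk values are placeholders to replace with your own):

```
#agentAddress  udp:127.0.0.1:161
agentAddress udp:161
#rocommunity public  localhost
#rocommunity public  default    -V systemonly
#rouser   authOnlyUser
sysLocation    Server room, rack 2
sysContact     Admin <admin@example.org>
disk       /     10000
disk       /var  10000
```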

Now we add v3 users: one read-only and one read-write (only if needed).

sudo net-snmp-config --create-snmpv3-user -ro -a SHA -x AES

Set the user name to your read-only user name and set passwords (preferably different ones) when prompted.

If you also need a read-write user (most use-cases don’t require this), then create another user with the following command:

sudo net-snmp-config --create-snmpv3-user -a SHA -x AES

Set the user name to your read-write user name and set passwords (preferably different ones) when prompted.

Restart the service

sudo service snmpd start

You’re all set.

Stealing OAuth tokens from the LinkedIn API using meta referrer

Meta referrer is a proposal from the WHATWG, implemented in WebKit, which allows us to control the circumstances under which the Referer (sic) header is sent when fetching page resources.

Let it first be said that, in my opinion, any security enforced on the basis of the Referer header should be a sanity check at best.

I noticed recently, when accessing LinkedIn’s online help system, that an authentication interstitial was shown when first loading the page. I’d never used the LinkedIn API before and was curious how this was handled over plain HTTP, so I cracked open the Chrome developer tools and got to work examining what was happening during this interstitial.

I quickly found a request to a JavaScript file including the API key for the help system which immediately returned an OAuth token for the user.

https://www.linkedin.com/uas/js/userspace?v=0.0.2000-RC1.28251-1405&apiKey=uq9789xsg6uf&credentialsCookie=true&authorize=true&statistics=false (File as it was)

I tried just including this file in a local HTML page.

[Screenshot: Screen Shot 2013-07-04 at 09.56.20]

A part of the JavaScript was checking window.location.host matched one of a number of predefined hostname masks.

No problem. Remember that in JavaScript we can override built-in objects and their prototypes. Just include this above it:

window.String.prototype._match = window.String.prototype.match;
window.String.prototype.match = function (pattern) {
    return true;
};

With this in place, I could add an extra snippet to capture the OAuth token once it had been loaded. Job done: this now worked on my local machine. Let’s try it on the open web…

[Screenshot: Screen Shot 2013-07-04 at 09.56.20]

This again? It now seems that userspace.js is being served with different content.

After some thought, it occurred to me that the only difference between this request and the original page load was the Referer header. The back-end must be doing something “clever” and restricting access based on this header. Sure enough, opening the file on its own in a new tab works. Thanks to support for meta referrer in WebKit, we can get rid of this annoyance just by adding

<meta name="referrer" content="never">

to our HTML file. And it works!

[Screenshot: successful token stealing]

So, what did we learn? You shouldn’t rely exclusively on JavaScript or the Referer header for any kind of authorization policy.

I reported this to LinkedIn on 3rd July 2013 and they reported it as fixed (by disabling requests without referrers) on 5th July 2013. I received a LinkedIn T-shirt all the way from California for my troubles.