Implementing Link Header Pagination on the Node.js Server

In the past few years, more and more APIs have begun to follow the RFC 5988 convention of using the Link header to provide the URL of the next page. We can do this too, and it’s quite easy.

Here is a function I recently wrote to do this for the simple case of a big array:

function paginate(sourceList, page, perPage) {
  var totalCount = sourceList.length;
  // Last valid zero-based page index: 100 items at 50 per page means
  // pages 0 and 1, so lastPage must be 1 (Math.floor would give 2).
  var lastPage = Math.ceil(totalCount / perPage) - 1;
  var sliceBegin = page * perPage;
  var sliceEnd = sliceBegin + perPage;
  var pageList = sourceList.slice(sliceBegin, sliceEnd);
  return {
    pageData: pageList,
    nextPage: page < lastPage ? page + 1 : null,
    totalCount: totalCount
  };
}

To demonstrate the usage, imagine you have defined a function getMovies which provides a movieList array you wish to paginate, and an Express route /movies which serves as the web API to your movie library. You might create a paginated route like this:

app.get('/movies', function(req, res, next) {
  var pageNum = parseInt(req.query.page || 0, 10);
  var perPage = parseInt(req.query.per_page || 50, 10);
  getMovies(function(err, movieList) {
    if (err) return next(err);
    var page = paginate(movieList, pageNum, perPage);
    if (page.nextPage !== null) {
      // RFC 5988 form: the target URI in angle brackets, then the relation type.
      res.set("Link", "</movies?page=" + page.nextPage +
        "&per_page=" + perPage + '>; rel="next"');
    }
    res.set("X-Total-Count", String(movieList.length));
    res.json(page.pageData);
  });
});
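On the consuming side, a client can keep fetching pages until the Link header stops advertising one. Here is a minimal sketch of extracting the next URL from a Link header in the standard RFC 5988 form (e.g. </movies?page=2>; rel="next"); the parsing is deliberately naive and handles only this simple single-link case:

```javascript
// Naive extraction of the rel="next" target from an RFC 5988 Link
// header value such as: </movies?page=2>; rel="next"
// A real client should use a proper Link-header parser.
function nextLink(linkHeader) {
  if (!linkHeader) return null;
  var match = linkHeader.match(/<([^>]+)>;\s*rel="next"/);
  return match ? match[1] : null;
}

nextLink('</movies?page=2>; rel="next"'); // → "/movies?page=2"
nextLink(null);                           // → null
```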

Note that in most cases, you would not be paginating from a big array. This was my first time paginating a fairly large set which was not from a database. In the case of database access, your function won’t be so general since it will depend on using the database API to create an efficient query by offset and limit.
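To illustrate, here is a hedged sketch of the database flavor using OFFSET/LIMIT. The db.query function, its callback shape, and the movies table are all placeholders for whatever database client and schema you actually have:

```javascript
// Hypothetical sketch: paginate with OFFSET/LIMIT instead of slicing
// a big in-memory array. `db.query(sql, params, cb)` is a stand-in
// for your actual database client's API.
function paginateQuery(db, page, perPage, done) {
  db.query('SELECT COUNT(*) AS n FROM movies', [], function(err, rows) {
    if (err) return done(err);
    var totalCount = rows[0].n;
    var lastPage = Math.ceil(totalCount / perPage) - 1;
    db.query('SELECT * FROM movies LIMIT ? OFFSET ?',
      [perPage, page * perPage],
      function(err, pageRows) {
        if (err) return done(err);
        // Same response shape as the in-memory paginate() above.
        done(null, {
          pageData: pageRows,
          nextPage: page < lastPage ? page + 1 : null,
          totalCount: totalCount
        });
      });
  });
}
```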

Eavesdropping on your iPhone's network traffic with your Mac and Wireshark

A few days after this was written, a relevant item appeared on Hacker News discussing the use of an HTTP proxy for this purpose, which allows you to see TLS traffic in most circumstances (a shortcoming of my approach here with Wireshark). Here is the link. The top comment recommends mitmproxy, which looks like a better tool for this job than Wireshark. Still, Wireshark is very good to learn so that you can intercept traffic when lower-level network functions are used directly, although that is becoming quite rare.

“Pokemon Go” is a mobile phone game in which little mofos spawn in various places in the real world (on a map) and you have to be within proximity to (a) discover them and (b) “catch” them by throwing a ball at them.

Finding these little mofos is a hassle because you don’t know where the optimal populations might be at any moment and/or you may be looking for a specific type of little mofo. If only you could see all the locations at once!

Someone created https://pokevision.com/ about a week prior to the writing of this article; however, it is currently not working. It looks like this, showing you the exact locations and spawn timeouts of the little mofos anywhere:

pokevision.jpg

I believe that pokevision was created by reverse engineering the communication between the mobile app and the backend game server, determining the API, and then using that artificially from the pokevision servers, caching the responses appropriately in a little-mofo-location-database.

If we are to do the same reverse engineering task, we need to set up a wifi hotspot that we control and monitor. This applies to traffic from any wifi-capable device to which you have restricted access, a mobile phone being just that.

On macOS, this is very easy. A simple checkbox abstracts away the creation and configuration of a bridge in which your wifi becomes an infrastructure access point and NAT and DHCP are handled for you automatically:

sharing.png

Next up we open wireshark and select the bridge as our capture interface. This allows us to eavesdrop on the iPhone, assuming it is connected to our Mac via wifi.

capture_interfaces.png

Now the packets start flowing in:

cap1.png

Notice the protocol is TLSv1.2. It’s probably HTTP beneath that encryption layer. Wireshark lets us follow the connection, so the data stream is more readable than raw packets:

context_follow.png

In this view we can see that we have correctly identified traffic originating from the “Pokemon Go” app, but that a handshake is underway and in order to view anything else, we’d need to decrypt the encryption layer.

following.png

This all took some 20 minutes and gave us an environment in which at least the ciphertext traffic was available to us and, with the right keys, the plaintext would be observable. I think that the pokevision team took this to the next level, using an Android phone (probably rooted) to harvest the keys required to decrypt the traffic.

Because pokevision was created through reverse engineering, it probably won’t last. This explains why we are seeing this error despite the fact that the “Pokemon Go” app itself is currently operational. If I were Niantic (owner of Pokemon Go), I would crack down on pokevision and add an in-app purchase in which the powers of pokevision were temporarily granted to a player.

pokevision_down.png

Restrict process traffic to VPN interface

Say you have already configured a VPN and the interface has the default name, tun0.

Say you have Transmission installed and want to force it to work through the VPN only.

If installed on Ubuntu, it should run as the user debian-transmission, which is a convenient handle by which to control its traffic.

Using iptables, we can, for any process whose owner is debian-transmission:

  1. accept packets destined for any machine on our LAN (192.168.1.0/24), thus allowing our HTTP client to work
  2. after 1., reject packets if they travel over any interface other than tun0

Thus:

iptables -A OUTPUT -m owner --uid-owner debian-transmission -d 192.168.1.0/24 -j ACCEPT
iptables -A OUTPUT -m owner --uid-owner debian-transmission \! -o tun0 -j REJECT

I learned this from http://www.botcyb.org/2012/11/force-application-to-use-vpn-using.html

Persisting Across Reboots

There are a few ways to do this; here is the way I prefer it.

Save your rules off to a file:

sudo sh -c "iptables-save > /etc/iptables.rules"

Add up and/or down hooks to interfaces in /etc/network/interfaces, e.g.:

auto eth0
iface eth0 inet dhcp
    pre-up iptables-restore < /etc/iptables.rules
    post-down iptables-restore < /etc/iptables.downrules

I learned this from https://help.ubuntu.com/community/IptablesHowTo

All these frameworks

So im working on something in node.js with justin (for fun and to
learn). it’s pretty neat i like it a lot for doing web stuff. im using
express.js for the backend.

I also prefer express and can’t seem to shake it as my go-to web “framework” – for me it is currently the quickest way to get a little web service going…

Ive been reading about different front ends, but I’m pretty confused.
Angular and React, do they do pretty much the same thing? The marketing
material makes them sound like very odd frameworks, but what the hell
does this mean? Can you use these without a backend like express (I
imagine you can if you dont want database access)? Or is it all UI stuff
in the front?

I know some of these do server side rendering (which seems pretty cool)
but does this sort of just replace the templating engine on the express
side (I’m using swig which is just like the jinja that django uses)?

Well it is indeed confusing.

So you probably know django is referred to as an “MVC” framework. Rails is also referred to as MVC.

Model view controller… Okay.
In Django/Rails the View is the part that renders a template (usually HTML) using some templating language. The Model brings your data forth for use in the View. The Controller serves this up in response to a web request.

The first thing to realize when speaking of web front-end frameworks (which is what angular and react are, with react being slightly less and slightly more all at once [I’ll explain]) is this: in the above model of MVC, they work strictly in the browser. This means they are 100% compatible with django/rails, whatever you’re using, because you are just including angular.js or react.js as a static javascript library and then following some convention thereafter to actually use the library to control your UI.

Sometimes people refer to Angular as “MVVM” or “MV*” because as people transitioned from traditional web frameworks like rails/django (request -> server-side template -> response) to SPAs (Single Page Apps), they tried to fit the round peg of their old thinking into this square hole. It doesn’t really work, and is a shitty and confusing thing, but I only tell you in order to say this: when react came out it said “it is the V” in “MVC” or “MVVM”.

This is still misleading though because react is not typically used like a server-side templating library, although it can sorta be (server-side rendering) but this is a pretty advanced usage and I actually still haven’t bothered trying it. It’s not the same as the server-side rendering of a template that we are used to in MVC/traditional frameworks, or maybe it is, who cares, you can read up on it and just know that with react you get it for free and it exists if/when you need to compute the UI state/DOM without the need for an actual DOM.

Just trying to figure out how the pieces of these things work together.
I mean I can pretty much use express just like I did django so this
makes 100% theoretical sense. How do these other things fit in?

Angular and React are just javascript libraries.

Angular involves writing html files in a way that binds them to data in a “controller” (a javascript object that is like an instance of some UI component).

in angular you extend html itself by writing what are called directives. it’s pretty neat but i dont recommend it anymore now that react exists. (note I have not looked at angular 2 yet)

in react, you never write any html. this is useful because you are no longer coupling to the DOM or browser. in fact, the react package on npm knows nothing about HTML or the DOM… to “mount” a react component onto the DOM, you need the react-dom package.

this is what gives way to technology such as React Native, which is becoming highly competitive with ionic (angular-based) in the hybrid-mobile app dev framework arena.

react is a better investment, imo, but it gives you less out of the box…

i recommend this tutorial http://teropa.info/blog/2015/09/10/full-stack-redux-tutorial.html – you will see the functional programming style of react.

that’s the other way to think of them… in angular you are mutating all over the place and maintaining state in these controller instances (each one has a $scope object that has data dangling off it, to which the elements are “bound”). in react you have a very primitive way to manage state, but people tend not to use it, instead it is recommended (by the “flux pattern” of which “redux” is a popular implementation) to store state in a special object central to your app (or component)… known as the store.

the store serves to decouple app state from the view itself – rendering happens only via parameter passing, like a pure function (the react component, then, is a pure function).

mutation of the state occurs by means of actions that are dispatched to the store. the store has a “reducer” you wrote, which knows how to compute the next state based on the incoming action. that new state is then what gives way to the next “frame”, or rendering of the view for the new app state.
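That loop (dispatch an action, the reducer computes the next state, the next state yields the next “frame”) is small enough to sketch in plain JavaScript. This is the shape of the idea only, not redux’s actual implementation:

```javascript
// Minimal store in the flux/redux style (illustrative, not redux itself).
// State changes only via actions dispatched through a reducer that
// returns the next state.
function createStore(reducer, initialState) {
  var state = initialState;
  return {
    getState: function() { return state; },
    dispatch: function(action) {
      // Compute the next "frame" of app state from the current one.
      state = reducer(state, action);
    }
  };
}

// Example reducer: a pure function of (state, action) -> next state.
function counterReducer(state, action) {
  switch (action.type) {
    case 'INCREMENT': return { count: state.count + 1 };
    case 'DECREMENT': return { count: state.count - 1 };
    default: return state;
  }
}

var store = createStore(counterReducer, { count: 0 });
store.dispatch({ type: 'INCREMENT' });
store.dispatch({ type: 'INCREMENT' });
store.dispatch({ type: 'DECREMENT' });
// store.getState() is now { count: 1 }; the view would re-render here.
```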

Linux on Macbook Air Notes

Goal

I intend to keep Mac OS X on the internal flash drive and run a minimal ubuntu install off a flash drive.

Target

My target device is a MacBook Air 6,2 (sudo dmidecode | grep Name)

The root disk will be installed on a Samsung Ultra Fit 64GB USB3.0 Flash Drive

Base Install

I used https://github.com/hartwork/image-bootstrap from another linux machine to write the initial OS to the flash drive.

Window System

I am using dwm compiled from source (suckless.org).

Troubleshooting (Solved)

No Sound

After installing ALSA I still had no sound, but I was able to fix it by appending the following to /etc/modprobe.d/alsa-base.conf

options snd-hda-intel model=pch position_fix=1

After a reboot, I had sound.

Troubleshooting (Unsolved)

  • No horizontal scrolling
  • Keyboard repeat rate too slow within dwm
  • Get 3rd mouse button so I can stop invoking xclip -o directly

How to insert fake calls into the iPhone call log history

The iPhone Wiki has an article about the call history database, which is outdated at the time of this writing, but does indicate that it’s a SQLite database. My iPhone had this file, which I could open and read rows from with the given schema, but there were no rows!

After some more digging, I found this post in /r/jailbreakdevelopers which reveals that the location and schema of the SQLite database has changed as of iOS 8.3.

My device is on the last version of iOS 8 (I am a jailbreak hold-out and missed the window for the iOS 9 update + JB).

Armed with the true location, I could continue to analyze and hopefully modify the database file correctly.

First thing I did was get openssh server running on my iPhone (available through Cydia).

Then I created a directory and downloaded the database directory.

scp -r root@172.20.10.2:/var/mobile/Library/CallHistoryDB .

Because I knew I would make a lot of mistakes, I initialized a git repo and checked the files in. This way I could use git to get the original, untainted database back at any time.

Next I created this Makefile so I could rapidly iterate, sending the modified database to my phone.

all: reset modify push

reset:
	git checkout CallHistoryDB/*

push:
	scp CallHistoryDB/CallHistory.* root@172.20.10.2:/var/mobile/Library/CallHistoryDB/

modify:
	ruby insert.rb

Next I examined the database with a text editor to determine the schema… I extracted it out to a text file:

CREATE TABLE ZCALLRECORD (
  Z_PK INTEGER PRIMARY KEY,
  Z_ENT INTEGER,
  Z_OPT INTEGER,
  ZANSWERED INTEGER,
  ZCALLTYPE INTEGER,
  ZDISCONNECTED_CAUSE INTEGER,
  ZFACE_TIME_DATA INTEGER,
  ZNUMBER_AVAILABILITY INTEGER,
  ZORIGINATED INTEGER,
  ZREAD INTEGER,
  ZDATE TIMESTAMP,
  ZDURATION FLOAT,
  ZADDRESS VARCHAR,
  ZDEVICE_ID VARCHAR,
  ZISO_COUNTRY_CODE VARCHAR,
  ZNAME VARCHAR,
  ZUNIQUE_ID VARCHAR
)

Next I created my ruby script. Basically I open the database, print the rows, insert a row, then print the rows again.

require "sqlite3"

db = SQLite3::Database.new "CallHistoryDB/CallHistory.storedata"

db.execute("select * from zcallrecord") do |row|
  p row
end

db.execute(%{
  INSERT INTO zcallrecord (
    Z_ENT, Z_OPT, ZANSWERED, ZCALLTYPE, ZDISCONNECTED_CAUSE,
    ZFACE_TIME_DATA, ZNUMBER_AVAILABILITY, ZORIGINATED, ZREAD,
    ZDATE, ZDURATION, ZADDRESS, ZDEVICE_ID, ZISO_COUNTRY_CODE,
    ZNAME, ZUNIQUE_ID
  ) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)}, [
  2, 1, 1, 1, nil, nil, 0, 0, 1, 475200000, 60.0,
  "+17277531234", nil, "us", nil, "58918CBA-C9C1-479B-8B6D-9DD1FD70E293"
])

db.execute("select * from zcallrecord") do |row|
  p row
end

I then ran make until I had all my inputs correct and the call log was modified appropriately. Be sure to close the Phone app each time, so that it reads from the database file again.
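A note on the ZDATE value: my assumption, based on Core Data’s conventions, is that ZDATE holds seconds since the Cocoa reference date (2001-01-01 00:00:00 UTC) rather than the Unix epoch, which would explain why raw Unix timestamps don’t produce sensible call dates. Converting between the two is a fixed offset:

```javascript
// Core Data (Cocoa) timestamps count seconds from 2001-01-01 00:00:00
// UTC; Unix timestamps count from 1970-01-01 00:00:00 UTC. The gap
// between the two epochs is a constant 978307200 seconds.
var COCOA_EPOCH_OFFSET = 978307200;

function unixToCocoa(unixSeconds) { return unixSeconds - COCOA_EPOCH_OFFSET; }
function cocoaToUnix(cocoaSeconds) { return cocoaSeconds + COCOA_EPOCH_OFFSET; }

// The ZDATE value inserted above decodes to a date in early 2016:
new Date(cocoaToUnix(475200000) * 1000).toISOString();
```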

How to run OpenVPN with TAP and TUN at the same time on Ubuntu 14.04

My last post showed how to set up OpenVPN in TAP mode. Unfortunately, TAP is not supported on iOS (I’m using the official OpenVPN app from the App Store).

This post is a continuation of that post. So we already have a bridge configured (br0) running OpenVPN in TAP mode. Now we want to add a second listener in TUN mode for iOS. We will reuse the same keys (hence the duplicate-cn option in both server configs).

The OpenVPN side is easy. OpenVPN will scan for .conf files in /etc/openvpn so just:

Rename /etc/openvpn/server.conf to /etc/openvpn/server-tap.conf

Create /etc/openvpn/server-tun.conf with contents like so:

port 1190
proto udp
dev tun
ca ca.crt
cert server.crt
key server.key
dh dh2048.pem
server 10.8.0.0 255.255.255.0
ifconfig-pool-persist ipp.txt
push "redirect-gateway def1 bypass-dhcp"
push "dhcp-option DNS 8.8.8.8"
push "dhcp-option DNS 208.67.222.222"
duplicate-cn
keepalive 10 120
comp-lzo
user nobody
group nogroup
persist-key
persist-tun
status openvpn-status.log
verb 3
mute 20

Now you just need to configure the linux side.

We want to configure sysctl to make the kernel forward traffic out to the internet.

echo 1 > /proc/sys/net/ipv4/ip_forward

Persist this setting by editing /etc/sysctl.conf to uncomment this line:

net.ipv4.ip_forward=1

Next up you need to open the relevant ports in the firewall. Typically:

ufw allow ssh
ufw allow 1189/udp # expose the TAP listener
ufw allow 1190/udp # expose the TUN listener

The ufw forwarding policy needs to be set as well. We’ll do this in ufw’s primary configuration file.

vim /etc/default/ufw

Look for DEFAULT_FORWARD_POLICY="DROP". This must be changed from DROP to ACCEPT. It should look like this when done:

DEFAULT_FORWARD_POLICY="ACCEPT"

Next we will add additional ufw rules for network address translation and IP masquerading of connected clients.

vim /etc/ufw/before.rules

Add the following to the top of your before.rules file:

*nat
:POSTROUTING ACCEPT [0:0]
-A POSTROUTING -s 10.8.0.0/8 -o br0 -j MASQUERADE
COMMIT

This masquerades traffic from the OpenVPN clients out through br0, our bridge interface configured previously.

Finally, enable the firewall

ufw enable

Your client profile will be pretty much identical to the TAP version. Here’s what it should look like:

client
dev tun
proto udp
remote my-server-1 1190
resolv-retry infinite
nobind
persist-key
persist-tun
ca ca.crt
cert client.crt
key client.key
ns-cert-type server
comp-lzo
verb 3
mute 20

Install this on your device. You can now connect using either TUN or TAP to a single OpenVPN server, using the same keys/identities.

Reference

How to setup OpenVPN with TAP bridging on Ubuntu 14.04

I wanted to use Steam’s in-home streaming feature outside of my home. It turns out that you can do this via VPN. OpenVPN is relatively simple to setup in TUN mode, but TAP mode is more complicated due to bridging.

It took gathering information from a few different sources (referenced at the end of this article) to produce an up-to-date tutorial for a TAP-based VPN configuration.

Topology

This is our basic network topology, or rather, the topology we hope to configure towards:

Router & DHCP Server
IP: 192.168.1.1
DHCP Range: 192.168.1.10 to 192.168.1.237

VPN Server

IP: 192.168.1.206 (DHCP Reservation)
VPN Clients IP Range: 192.168.1.238 - 192.168.1.254

Server Setup

Install OpenVPN, bridge tools, and Easy-RSA

apt-get update
apt-get install openvpn bridge-utils easy-rsa

Configure Bridged Interface

Although you will see examples of bridge configurations with static addresses defined, this did not work for me; I was not able to access the outside internet. I looked at the Ubuntu wiki on bridging (see references) and discovered a configuration for a simple, DHCP-based bridge. This worked best for me. Everything after bridge_ports is from a different TAP tutorial – I don’t know what those options do!

auto lo
iface lo inet loopback

iface eth0 inet manual

auto br0
iface br0 inet dhcp
    bridge_ports eth0
    bridge_fd 9
    bridge_hello 2
    bridge_maxage 12
    bridge_stp on
    bridge_prio 1000

The simplest way to check this is to reboot (shutdown -r now), test whether the outside internet is still accessible (ping google.com), and look at the output of ifconfig.

Configure OpenVPN

Extract the example VPN server configuration into /etc/openvpn.

gunzip -c /usr/share/doc/openvpn/examples/sample-config-files/server.conf.gz > /etc/openvpn/server.conf

Open the server config, e.g. vim /etc/openvpn/server.conf

Configure the following, yours may be different depending on your topology:

port 1189
proto udp
server-bridge 192.168.1.206 255.255.255.0 192.168.1.239 192.168.1.254
dev tap0
ca ca.crt
cert server.crt
key server.key
dh dh2048.pem
up "/etc/openvpn/up.sh br0"
down "/etc/openvpn/down.sh br0"
ifconfig-pool-persist ipp.txt
keepalive 10 600
comp-lzo
persist-key
persist-tun
verb 3
mute 20
status openvpn-status.log
duplicate-cn

Create the scripts that will execute when the OpenVPN service starts and stops. These scripts add the OpenVPN tap interface to the server’s br0 bridge and remove it again.

/etc/openvpn/down.sh

#!/bin/sh
PATH=/sbin:/usr/sbin:/bin:/usr/bin
BR=$1
DEV=$2
brctl delif $BR $DEV
ip link set "$DEV" down

/etc/openvpn/up.sh

#!/bin/sh
PATH=/sbin:/usr/sbin:/bin:/usr/bin
BR=$1
DEV=$2
MTU=$3
ip link set "$DEV" up promisc on mtu "$MTU"
if ! brctl show $BR | egrep -q "\W+$DEV$"; then
brctl addif $BR $DEV
fi

Make these scripts executable

chmod a+x /etc/openvpn/down.sh /etc/openvpn/up.sh

Generate the keys

Copy over the easy-rsa variables file and make the keys directory

cp -r /usr/share/easy-rsa/ /etc/openvpn
mkdir /etc/openvpn/easy-rsa/keys

Open up /etc/openvpn/easy-rsa/vars and configure your defaults, e.g.

export KEY_COUNTRY="US"
export KEY_PROVINCE="TX"
export KEY_CITY="Dallas"
export KEY_ORG="My Company Name"
export KEY_EMAIL="sammy@example.com"
export KEY_OU="MYOrganizationalUnit"

You must also set KEY_NAME="server"; this value is referenced by the OpenVPN config.

Generate the Diffie-Hellman parameters

openssl dhparam -out /etc/openvpn/dh2048.pem 2048

Now move to the easy-rsa dir, source the variables, clean the working directory and build everything:

cd /etc/openvpn/easy-rsa
. ./vars
./clean-all
./build-ca
./build-key-server server

Make sure that you respond positively to the prompts; otherwise the defaults are no and the key creation will not complete.

Next, move the key files over to the openvpn directory

cp /etc/openvpn/easy-rsa/keys/{server.crt,server.key,ca.crt} /etc/openvpn

You’re ready to start the server

service openvpn start
service openvpn status

If the server is not running, look in /var/log/syslog for errors

Generate Certificates and Keys for Clients

So far we’ve installed and configured the OpenVPN server, created a Certificate Authority, and created the server’s own certificate and key. In this step, we use the server’s CA to generate certificates and keys for each client device which will be connecting to the VPN. These files will later be installed onto the client devices such as a laptop or smartphone.

To create separate authentication credentials for each device you intend to connect to the VPN, you should complete this step for each device, but change the name client1 below to something different such as client2 or iphone2. With separate credentials per device, they can later be deactivated at the server individually, if need be. The remaining examples in this tutorial will use client1 as our example client device’s name.

As we did with the server’s key, now we build one for our client1 example. You should still be working out of /etc/openvpn/easy-rsa.

./build-key client1

Again you need to respond positively when presented with yes or no prompts. You should not enter a challenge password.

You can repeat this section again for each client, replacing client1 with the appropriate client name throughout.

The example client configuration file should be copied to the Easy-RSA key directory too. We’ll use it as a template which will be downloaded to client devices for editing. In the copy process, we are changing the name of the example file from client.conf to client.ovpn because the .ovpn file extension is what the clients will expect to use.

cp /usr/share/doc/openvpn/examples/sample-config-files/client.conf /etc/openvpn/easy-rsa/keys/client.ovpn

Edit the client profile to reflect your server’s IP address and configure it for tap. Also be sure to replace my-server-1 with your VPN server’s IP or domain name.

/etc/openvpn/easy-rsa/keys/client.ovpn

client
dev tap
proto udp
remote my-server-1 1189
resolv-retry infinite
nobind
persist-key
persist-tun
ca ca.crt
cert client.crt
key client.key
ns-cert-type server
comp-lzo
verb 3
mute 20

Finally, you can transfer client1.crt, client1.key, client.ovpn, and ca.crt over to your client.

Create and download Tunnelblick Config (Mac only)

cd /etc/openvpn/easy-rsa/keys
rm -rf my-vpn.tblk
mkdir my-vpn.tblk
cp client1.crt my-vpn.tblk/client.crt
cp client1.key my-vpn.tblk/client.key
cp client.ovpn my-vpn.tblk
cp ca.crt my-vpn.tblk
tar -czf my-vpn.tblk.tar.gz my-vpn.tblk

Now you can scp that over to your Mac, double-click to extract, and then double-click the .tblk file to allow Tunnelblick to install the profile.

Troubleshooting

It connects, I can ping the OpenVPN server’s LAN address, but no internet or other LAN addresses.

Are you running on VMWare VSphere or ESXi? If so you need to configure your switch in promiscuous mode.

VSphere host config tab, networking sidebar
VSphere host switch properties
Editing VSphere host switch properties to enable promiscuous mode

It connects, I can ping LAN and internet addresses, but DNS isn’t working.

If you manually configure a DNS (e.g. 8.8.8.8), does it work? Then you can configure your openvpn server to push DNS configuration to the clients.

Add a line like this to the openvpn server config:

push "dhcp-option DNS 192.168.1.1"

References

Hello hexo goodbye Jekyll

Migrating my posts from Jekyll into Hexo without breaking Hexo’s defaults.

The tumblr handling is left over from when I migrated from Tumblr to Jekyll; that was never a proper migration. Now everything is properly migrated with this script.

Gist

require 'yaml'

def read_post(path)
  post = { front_matter: {}, content: nil }
  File.open(path) do |src|
    front_matter_lines = []
    content_lines = []
    scanning_front_matter = true
    parsing_front_matter = false
    src.readlines.each do |line|
      if scanning_front_matter and line.strip == "---"
        if parsing_front_matter
          parsing_front_matter = false
          scanning_front_matter = false
        else
          parsing_front_matter = true
        end
      else
        if parsing_front_matter
          front_matter_lines.push(line)
        else
          content_lines.push(line)
        end
      end
    end
    begin
      post[:front_matter] = YAML.load(front_matter_lines.join())
    rescue
      puts path
      puts front_matter_lines[0..5].inspect
      exit(0)
    end
    post[:content] = content_lines.join()
  end
  post
end

def write_post(path, post)
  File.open(path, "w") do |dest|
    dest.write YAML.dump(post[:front_matter]) + "---\n"
    dest.write post[:content]
  end
  puts "Wrote #{path}"
end

def import_post(path, &block)
  cap = File.basename(path).match(/^(\d\d\d\d)-(\d\d)-(\d\d)-(.+)$/)
  title = File.basename(cap[4].gsub('-', ' ').capitalize, '.*')
  post = read_post(path)
  write_post("source/_posts/#{title.gsub(' ', '-')}.md", {
    content: block_given? ? block.call(post[:content]) : post[:content],
    front_matter: {
      "date" => cap[1..3].join('-'),
      "title" => title,
      "tags" => post[:front_matter]["tags"]
    }
  })
end

Dir.glob([
  "../keyvanfatehi.github.com/_posts/tumblr/*.true",
  "../keyvanfatehi.github.com/_posts/*.md",
]).each do |path|
  import_post(path) do |body|
    body
      .gsub("{% include JB/setup %}\n", "")
      .gsub(/{% highlight ruby %}/, "```ruby")
      .gsub(/{% endhighlight %}/, "```")
  end
end

Crumple, Inc.: a startup post-mortem

Crumple is the startup idea of an old friend of mine from high school. Knowing that I can develop whatever, he presented me with his pitch and I thought:

Hell yes! You’re right! Paper receipts have got to go! I will gladly build it if you will sell it.


We went to work… My co-founder was responsible for the business aspects, and I was responsible for the technical aspects. We would consult with each other on as much as we could. What could go wrong?

Unfortunately, we were far too optimistic about our project. We’d later discover just how very wrong we were.

Sadly, we didn’t realize this until we had built a complete platform and spent quite a bit of [our own] money. Some of the resulting code is worth discussing and releasing before it falls into the abyss of proprietary abandonware. In addition, the business lessons are valuable and worthy of reflection.

If I didn’t provide a link to source code for a given project, it means I didn’t think it useful or general enough to spend the effort moving the code out of my GitLab server. For the curious, feel free to email me and I am happy to share.

The Software

By September 2015, we’d been at it for 9 months straight – we’d designed, built, and deployed the entirety of the Crumple platform… complete with tests and CI (because that’s how I do things). It was my co-founder’s time to shine on the business front (more on this later)… Until then, let’s look at the software:

VirtualPrinter (ESC/P Parser)

source code

This is one of the first and most challenging things I wrote for Crumple.

It is a JavaScript module that we used for parsing and converting Epson Standard Code escape sequences to HTML receipts on the smartphone.

We also used it in the Terminal under Node.js to determine attributes of a print payload in order to employ rules defined for a given store.
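To give a flavor of what the parsing involves (a toy sketch only, not the actual VirtualPrinter code): the payload is a byte stream in which printable text is interleaved with escape sequences introduced by the ESC byte (0x1b). This sketch recognizes just two ESC/POS commands, ESC @ (initialize) and ESC E n (emphasis on/off); the real module handled far more.

```javascript
// Toy ESC/POS-style tokenizer: split printable text from escape
// sequences beginning with ESC (0x1b). Illustrative only.
function tokenize(bytes) {
  var tokens = [];
  var text = '';
  var i = 0;
  while (i < bytes.length) {
    if (bytes[i] === 0x1b) { // ESC introduces a command
      if (text) { tokens.push({ type: 'text', value: text }); text = ''; }
      var cmd = String.fromCharCode(bytes[i + 1]);
      if (cmd === '@') {        // ESC @ : initialize printer
        tokens.push({ type: 'init' });
        i += 2;
      } else if (cmd === 'E') { // ESC E n : emphasis on/off
        tokens.push({ type: 'bold', on: bytes[i + 2] === 1 });
        i += 3;
      } else {                  // unknown: skip ESC + command byte
        tokens.push({ type: 'unknown', cmd: cmd });
        i += 2;
      }
    } else {
      text += String.fromCharCode(bytes[i]);
      i += 1;
    }
  }
  if (text) tokens.push({ type: 'text', value: text });
  return tokens;
}

// ESC @, "Hi", ESC E 1, "!"
tokenize([0x1b, 0x40, 0x48, 0x69, 0x1b, 0x45, 0x01, 0x21]);
```

A renderer would then walk the token list, mapping text and style tokens to HTML, which is roughly the split of concerns the real parser needed.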

The Terminal

The name we used for our custom software and the physical BeagleBone Black it runs on. It runs Debian and Node.js to capture receipts and send them to the backend.

It operates as a physical Man in the Middle between the point of sale and the receipt printer. It supports serial (FTDI) and/or the beaglebone’s USB client interface with the ability to masquerade as a printer.

It kept a persistent connection with the backend via WebSocket and sent its logs to papertrail.

It broadcast a Bluetooth Low Energy signal which enabled the mobile app to receive receipts over the air when in close proximity.

The Workbench

This project provides a command line interface for setting up Terminals from scratch.

This project made it ridiculously easy and automatic to setup new units. The web application would even show the commands to enter (e.g. when creating a new terminal for a store in the web app), making it so all a technician had to do was copy paste the command and run it with a freshly flashed beaglebone.

It was inspired by Ansible in that all the configuration was done idempotently via SSH, except that in our case the network was not available, so I used the serial debug cable for everything. The technician simply plugged the debug cable in, invoked the CLI, and waited for a success message from the web interface, as signalled by the Terminal app’s websocket coming online!

The Web App

It provides the admin portal, customer portal, and mobile app API. Of course all the database and pub/sub stuff is here too. We used PostgreSQL’s NOTIFY feature for our pub/sub, keeping the stack super lean (i.e. no need for Redis in order to scale horizontally). This worked very well, although I never had the opportunity to stress it at scale.

The Mobile App

We used Ionic to implement our Android and iOS app. The thing that was particularly special here was the way we achieved over-the-air receipt transfers.

It was pretty magical to hover the phone over the Tablet and receive your receipt and see it rendered directly onto your smartphone.

The Tablet

An android app that would auto-discover the Terminals on the LAN and bind to them with a WebSocket. It works in a “Kiosk Mode” and was the customer-facing interface to the Terminal.

Users would use this to select how to get their receipt, enter an email address, tip, and sign (if a credit card transaction).

Idle screen
Signing screen seen for credit card transactions
Enter a tip amount
Many choices
The pin code was a security compromise
Receipt arrives over the air like magic

Business Post Mortem

Anyway, despite the technology, the business failed. My co-founder got burned out and quit and I had full-time classes – plus, from the start I had indicated zero interest in running the business side of things.

To put a point of finality on things, my co-founder wrote a document outlining what went wrong. I’ve quoted it below:

Hardware was not the correct way to implement paperless receipts. Hardware is very expensive and support is very cost intensive. With thousands of customers it would be very expensive to provide support to these stores. A software solution made far more sense (the cost in gas and time for my support visits to g burger and yogurt bar kill our profit margins).

I wish we knew this sooner!

The target market is very difficult to penetrate. SMB’s will spend money on things they absolutely need. Things like square and clover worked for SMB’s because it solved a big problem they had (accepting credit cards and a low price POS system). For the larger retail stores they already offer email receipts via a software solution and they don’t have to deal with problem #3.

I wish we knew this sooner too!

We not only needed to sell to businesses but we also needed to sell to the customers. Even after a store has Crumple, the cashiers or business owners must sell Crumple to customers. This was not an easy task. At both g burger and yogurt bar a very small percentage of customers actually used the Crumple app (less than 1 percent of customers). The reason email receipt is far more common is because nearly everyone has an email address. You don’t need to sell them on an additional step in order to take advantage of the service.

We knew this, but it didn’t deter us from proceeding anyway.

Poor Sales/Distribution Strategy. The person in charge of sales and marketing was completely clueless and had no prior experience. There was no clear plan on how to get Crumple to business owners.

Although he is right, my co-founder is just being self-deprecating here.

No product market fit. Even for the two businesses we did acquire as “customers”, neither truly loved the product. It’s not a product that they can’t live without. We also didn’t achieve product market fit with customers. For those who downloaded the app, very few were repeat users. Most only used it once or never used it at all.

This doesn’t seem like something we could have known until we tried.

We built too fast. We rushed it. With more market research and by surveying business owners we would have realized points 1-5 and potentially saved ourselves time and money. Its very crucial to build something that solves a big problem and that people really want it.

I should have stuck to my guns early on when I initially proved the technology – but I allowed myself to be convinced by my co-founder that we needed “the real product” and “just one more feature” over and over again.

I would say “Lesson Learned” but I have a feeling that this experience requires further analysis before its lessons are fully internalized.