How to insert fake calls into the iPhone call log history

The iPhone Wiki has an article about the call history database, which is outdated at the time of this writing, but does indicate that it’s a SQLite database. My iPhone had this file, which I could open and read rows from with the given schema, but there were no rows!

After some more digging, I found this post in /r/jailbreakdevelopers which reveals that the location and schema of the SQLite database has changed as of iOS 8.3.

My device is the last version of iOS 8 (I am a jailbreak hold-out and missed the window for the iOS 9 update + JB).

Armed with the true location, I could continue to analyze and hopefully modify the database file correctly.

The first thing I did was get an OpenSSH server running on my iPhone (available through Cydia).

Then I created a directory and downloaded the database directory.

scp -r root@172.20.10.2:/var/mobile/Library/CallHistoryDB .

Because I knew I would make a lot of mistakes, I initialized a git repo and checked the files in. This way I could use git to get the original, untainted database back at any time.

Next I created this Makefile so I could rapidly iterate, sending the modified database to my phone.

all: reset modify push

reset:
	git checkout CallHistoryDB/*

push:
	scp CallHistoryDB/CallHistory.* root@172.20.10.2:/var/mobile/Library/CallHistoryDB/

modify:
	ruby insert.rb

Next I examined the database with a text editor to determine the schema… I extracted it out to a text file:

CREATE TABLE ZCALLRECORD (
  Z_PK INTEGER PRIMARY KEY,
  Z_ENT INTEGER,
  Z_OPT INTEGER,
  ZANSWERED INTEGER,
  ZCALLTYPE INTEGER,
  ZDISCONNECTED_CAUSE INTEGER,
  ZFACE_TIME_DATA INTEGER,
  ZNUMBER_AVAILABILITY INTEGER,
  ZORIGINATED INTEGER,
  ZREAD INTEGER,
  ZDATE TIMESTAMP,
  ZDURATION FLOAT,
  ZADDRESS VARCHAR,
  ZDEVICE_ID VARCHAR,
  ZISO_COUNTRY_CODE VARCHAR,
  ZNAME VARCHAR,
  ZUNIQUE_ID VARCHAR
)

Next I created my Ruby script. Basically it opens the database, prints the rows, inserts a row, then prints the rows again.

require "sqlite3"

db = SQLite3::Database.new "CallHistoryDB/CallHistory.storedata"

db.execute("select * from zcallrecord") do |row|
  p row
end

db.execute(%{
  INSERT INTO zcallrecord (
    Z_ENT,
    Z_OPT,
    ZANSWERED,
    ZCALLTYPE,
    ZDISCONNECTED_CAUSE,
    ZFACE_TIME_DATA,
    ZNUMBER_AVAILABILITY,
    ZORIGINATED,
    ZREAD,
    ZDATE,
    ZDURATION,
    ZADDRESS,
    ZDEVICE_ID,
    ZISO_COUNTRY_CODE,
    ZNAME,
    ZUNIQUE_ID
  ) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)}, [
  2, 1, 1, 1, nil, nil, 0, 0, 1, 475200000, 60.0, "+17277531234", nil, "us", nil, "58918CBA-C9C1-479B-8B6D-9DD1FD70E293"
])

db.execute("select * from zcallrecord") do |row|
  p row
end
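A note on that ZDATE value: this is a Core Data store, and Core Data keeps dates as seconds since Apple's reference date (2001-01-01 00:00:00 UTC) rather than the Unix epoch, which is why 475200000 lands in early 2016 rather than 1985. A helper sketch for producing plausible values (the helper name is my own):

```ruby
# Core Data stores dates as seconds since Apple's reference date
# (2001-01-01 00:00:00 UTC), not the Unix epoch.
APPLE_EPOCH = Time.utc(2001, 1, 1)

def to_apple_timestamp(time)
  (time - APPLE_EPOCH).to_i
end

puts to_apple_timestamp(Time.utc(2016, 1, 23))
```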

I then ran make until I had all my inputs correct and the call log was modified appropriately. Be sure to force-close the Phone app each time so that it re-reads the database file.

How to run OpenVPN with TAP and TUN at the same time on Ubuntu 14.04

My last post showed how to setup OpenVPN in TAP mode. Unfortunately, TAP is not supported on iOS (I’m using the official OpenVPN app from the App Store).

This post is a continuation of that post. So we already have a bridge configured (br0) running OpenVPN in TAP mode. Now we want to add a second listener in TUN mode for iOS. We will reuse the same keys (hence the duplicate-cn option in both server configs).

The OpenVPN side is easy. OpenVPN will scan for .conf files in /etc/openvpn so just:

Rename /etc/openvpn/server.conf to /etc/openvpn/server-tap.conf

Create /etc/openvpn/server-tun.conf with contents like so:

port 1190
proto udp
dev tun
ca ca.crt
cert server.crt
key server.key
dh dh2048.pem
server 10.8.0.0 255.255.255.0
ifconfig-pool-persist ipp.txt
push "redirect-gateway def1 bypass-dhcp"
push "dhcp-option DNS 8.8.8.8"
push "dhcp-option DNS 208.67.222.222"
duplicate-cn
keepalive 10 120
comp-lzo
user nobody
group nogroup
persist-key
persist-tun
status openvpn-status.log
verb 3
mute 20

Now you just need to configure the linux side.

We want to configure sysctl to make the kernel forward traffic out to the internet.

echo 1 > /proc/sys/net/ipv4/ip_forward

Persist this setting by editing /etc/sysctl.conf to uncomment this line:

net.ipv4.ip_forward=1

Next up you need to configure the firewall to perform NAT. Typically:

ufw allow ssh
ufw allow 1189/udp # expose the TAP listener
ufw allow 1190/udp # expose the TUN listener

The ufw forwarding policy needs to be set as well. We’ll do this in ufw’s primary configuration file.

vim /etc/default/ufw

Look for DEFAULT_FORWARD_POLICY="DROP". This must be changed from DROP to ACCEPT. It should look like this when done:

DEFAULT_FORWARD_POLICY="ACCEPT"

Next we will add additional ufw rules for network address translation and IP masquerading of connected clients.

vim /etc/ufw/before.rules

Add the following to the top of your before.rules file:

*nat
:POSTROUTING ACCEPT [0:0]
-A POSTROUTING -s 10.8.0.0/8 -o br0 -j MASQUERADE
COMMIT

We are allowing traffic from the openvpn clients to br0, our bridge interface configured previously.

Finally, enable the firewall

ufw enable

Your client profile will be pretty much identical to the TAP version. Here's what it should look like:

client
dev tun
proto udp
remote my-server-1 1190
resolv-retry infinite
nobind
persist-key
persist-tun
ca ca.crt
cert client.crt
key client.key
ns-cert-type server
comp-lzo
verb 3
mute 20

Install this on your device. You can now connect over either TUN or TAP to a single OpenVPN server, using the same keys/identities.


How to setup OpenVPN with TAP bridging on Ubuntu 14.04

I wanted to use Steam’s in-home streaming feature outside of my home. It turns out that you can do this via VPN. OpenVPN is relatively simple to setup in TUN mode, but TAP mode is more complicated due to bridging.

It took gathering information from a few different sources (referenced at the end of this article) to produce an up-to-date tutorial for a TAP-based VPN configuration.

Topology

This is our basic network topology, or rather, the topology we hope to configure towards:

Router & DHCP Server
IP: 192.168.1.1
DHCP Range: 192.168.1.10 to 192.168.1.237

VPN Server

IP: 192.168.1.206 (DHCP Reservation)
VPN Clients IP Range: 192.168.1.238 - 192.168.1.254

Server Setup

Install OpenVPN, bridge tools, and Easy-RSA

apt-get update
apt-get install openvpn bridge-utils easy-rsa

Configure Bridged Interface

Although you will see examples of bridge configurations with static addresses defined, this did not work for me: I was not able to access the outside internet. I looked into the Ubuntu wiki on bridging (see references) and discovered a configuration for a simple, DHCP-based bridge. This worked best for me. Everything after bridge_ports is from a different TAP tutorial; I don't know what those options do!

auto lo
iface lo inet loopback

iface eth0 inet manual

auto br0
iface br0 inet dhcp
        bridge_ports eth0
        bridge_fd 9
        bridge_hello 2
        bridge_maxage 12
        bridge_stp on
        bridge_prio 1000

The simplest way to check this is to reboot (shutdown -r now), then test whether the outside internet is still accessible (ping google.com) and look at the output of ifconfig.

Configure OpenVPN

Extract the example VPN server configuration into /etc/openvpn.

gunzip -c /usr/share/doc/openvpn/examples/sample-config-files/server.conf.gz > /etc/openvpn/server.conf

Open the server config, e.g. vim /etc/openvpn/server.conf

Configure the following, yours may be different depending on your topology:

port 1189
proto udp
server-bridge 192.168.1.206 255.255.255.0 192.168.1.239 192.168.1.254
dev tap0
ca ca.crt
cert server.crt
key server.key  
dh dh2048.pem
up "/etc/openvpn/up.sh br0"
down "/etc/openvpn/down.sh br0"
ifconfig-pool-persist ipp.txt
keepalive 10 600
comp-lzo
persist-key
persist-tun
verb 3
mute 20
status openvpn-status.log
duplicate-cn

Create the scripts that will execute when the OpenVPN service starts and stops. These scripts add the OpenVPN interface to, and remove it from, the server's br0 bridge.

/etc/openvpn/down.sh

#!/bin/sh
PATH=/sbin:/usr/sbin:/bin:/usr/bin
BR=$1
DEV=$2
brctl delif $BR $DEV
ip link set "$DEV" down

/etc/openvpn/up.sh

#!/bin/sh
PATH=/sbin:/usr/sbin:/bin:/usr/bin
BR=$1
DEV=$2
MTU=$3
ip link set "$DEV" up promisc on mtu "$MTU"
if ! brctl show $BR | egrep -q "\W+$DEV$"; then
  brctl addif $BR $DEV
fi

Make these scripts executable

chmod a+x /etc/openvpn/down.sh /etc/openvpn/up.sh

Generate the keys

Copy over the easy-rsa variables file and make the keys directory

cp -r /usr/share/easy-rsa/ /etc/openvpn
mkdir /etc/openvpn/easy-rsa/keys

Open up /etc/openvpn/easy-rsa/vars and configure your defaults, e.g.

export KEY_COUNTRY="US"
export KEY_PROVINCE="TX"
export KEY_CITY="Dallas"
export KEY_ORG="My Company Name"
export KEY_EMAIL="sammy@example.com"
export KEY_OU="MYOrganizationalUnit"

You must also set KEY_NAME="server"; this value is referenced by the OpenVPN config.

Generate the Diffie-Hellman parameters

openssl dhparam -out /etc/openvpn/dh2048.pem 2048

Now move to the easy-rsa dir, source the variables, clean the working directory and build everything:

cd /etc/openvpn/easy-rsa
. ./vars
./clean-all
./build-ca
./build-key-server server

Make sure that you respond positively to the prompts; otherwise the defaults are no and the key creation will not complete.

Next, move the key files over to the openvpn directory

cp /etc/openvpn/easy-rsa/keys/{server.crt,server.key,ca.crt} /etc/openvpn

You’re ready to start the server

service openvpn start
service openvpn status

If the server is not running, look in /var/log/syslog for errors

Generate Certificates and Keys for Clients

So far we’ve installed and configured the OpenVPN server, created a Certificate Authority, and created the server’s own certificate and key. In this step, we use the server’s CA to generate certificates and keys for each client device which will be connecting to the VPN. These files will later be installed onto the client devices such as a laptop or smartphone.

To create separate authentication credentials for each device you intend to connect to the VPN, you should complete this step for each device, but change the name client1 below to something different such as client2 or iphone2. With separate credentials per device, they can later be deactivated at the server individually, if need be. The remaining examples in this tutorial will use client1 as our example client device’s name.

As we did with the server’s key, now we build one for our client1 example. You should still be working out of /etc/openvpn/easy-rsa.

./build-key client1

Again you need to respond positively when presented with yes or no prompts. You should not enter a challenge password.

You can repeat this section again for each client, replacing client1 with the appropriate client name throughout.

The example client configuration file should be copied to the Easy-RSA key directory too. We’ll use it as a template which will be downloaded to client devices for editing. In the copy process, we are changing the name of the example file from client.conf to client.ovpn because the .ovpn file extension is what the clients will expect to use.

cp /usr/share/doc/openvpn/examples/sample-config-files/client.conf /etc/openvpn/easy-rsa/keys/client.ovpn

Edit the client profile to reflect your server’s IP address and configure it for tap. Also be sure to replace my-server-1 with your VPN server’s IP or domain name.

/etc/openvpn/easy-rsa/keys/client.ovpn

client
dev tap
proto udp
remote my-server-1 1189
resolv-retry infinite
nobind
persist-key
persist-tun
ca ca.crt
cert client.crt
key client.key
ns-cert-type server
comp-lzo
verb 3
mute 20

Finally, you can transfer client1.crt, client1.key, client.ovpn, and ca.crt over to your client.

Create and download Tunnelblick Config (Mac only)

cd /etc/openvpn/easy-rsa/keys
rm -rf my-vpn.tblk
mkdir my-vpn.tblk
cp client1.crt my-vpn.tblk/client.crt
cp client1.key my-vpn.tblk/client.key
cp client.ovpn my-vpn.tblk
cp ca.crt my-vpn.tblk
tar -czf my-vpn.tblk.tar.gz my-vpn.tblk

Now you can scp that over to your Mac, double-click to extract, and then double-click the .tblk file to allow Tunnelblick to install the profile.

Troubleshooting

It connects, I can ping the OpenVPN server’s LAN address, but no internet or other LAN addresses.

Are you running on VMware vSphere or ESXi? If so, you need to configure your virtual switch to allow promiscuous mode.

(Screenshots: the vSphere host Configuration tab, Networking sidebar; the switch properties dialog; editing the switch properties to enable promiscuous mode.)

It connects, I can ping LAN and internet addresses, but DNS isn’t working.

If you manually configure a DNS server on the client (e.g. 8.8.8.8), does it work? If so, you can configure your OpenVPN server to push DNS configuration to the clients.

Add a line like this to the openvpn server config:

push "dhcp-option DNS 192.168.1.1"

References

Hello hexo goodbye Jekyll

Migrating my posts from Jekyll into Hexo without breaking Hexo’s defaults.

The Tumblr stuff is from when I migrated from Tumblr to Jekyll; that was never a proper migration. Now everything is properly migrated with this script.

Gist

require 'yaml'

def read_post(path)
  post = { front_matter: {}, content: nil }
  File.open(path) do |src|
    front_matter_lines = []
    content_lines = []
    scanning_front_matter = true
    parsing_front_matter = false
    src.readlines.each do |line|
      if scanning_front_matter and line.strip == "---"
        if parsing_front_matter
          parsing_front_matter = false
          scanning_front_matter = false
        else
          parsing_front_matter = true
        end
      else
        if parsing_front_matter
          front_matter_lines.push(line)
        else
          content_lines.push(line)
        end
      end
    end
    begin
      post[:front_matter] = YAML.load(front_matter_lines.join())
    rescue
      puts path
      puts front_matter_lines[0..5].inspect
      exit(0)
    end
    post[:content] = content_lines.join()
  end
  post
end

def write_post(path, post)
  File.open(path, "w") do |dest|
    dest.write YAML.dump(post[:front_matter])+"---\n"
    dest.write post[:content]
  end
  puts "Wrote #{path}"
end

def import_post(path, &block)
  cap = File.basename(path).match(/^(\d\d\d\d)-(\d\d)-(\d\d)-(.+)$/)
  title = File.basename(cap[4].gsub('-', ' ').capitalize, '.*')
  post = read_post(path)
  write_post("source/_posts/#{title.gsub(' ', '-')}.md", {
    content: block_given? ? block.call(post[:content]) : post[:content],
    front_matter: {
      "date" => cap[1..3].join('-'),
      "title" => title,
      "tags" => post[:front_matter]["tags"]
    }
  })
end

Dir.glob([
  "../keyvanfatehi.github.com/_posts/tumblr/*.true",
  "../keyvanfatehi.github.com/_posts/*.md",
]).each do |path|
  import_post(path) do |body|
    body
      .gsub("{% include JB/setup %}\n","")
      .gsub(/{% highlight ruby %}/,"```ruby")
      .gsub(/{% endhighlight %}/,"```")
  end
end
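The filename regex in import_post does the heavy lifting, so it's worth seeing in isolation. A quick sketch (the sample filename is invented for illustration):

```ruby
# Split a Jekyll-style post filename into date parts and a title,
# the same way import_post above does.
cap = File.basename("2014-02-02-npm-mirror.md").match(/^(\d\d\d\d)-(\d\d)-(\d\d)-(.+)$/)
date  = cap[1..3].join('-')
title = File.basename(cap[4].gsub('-', ' ').capitalize, '.*')
puts date   # 2014-02-02
puts title  # Npm mirror
```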

Crumple, Inc.: a startup post-mortem

Crumple is the startup idea of an old friend of mine from high school. Knowing that I can develop whatever, he presented me with his pitch and I thought:

Hell yes! You’re right! Paper receipts have got to go! I will gladly build it if you will sell it.


We went to work… My co-founder was responsible for the business aspects, and I was responsible for the technical aspects. We would consult with each other on as much as we could. What could go wrong?

Unfortunately we were far too optimistic about our project. We’d later discover we were so very very wrong.

Sadly, we didn’t realize this until we had build a complete platform and spent quite a bit of [our own] money. Some of the resulting code is worth discussing and releasing, before it falls into the abyss of proprietary abandonware. In addition, the business lessons are valuable and worthy of reflection.

If I didn’t provide a link to source code for a given project, it means I didn’t think it useful or general enough to spend the effort moving the code out of my GitLab server. For the curious, feel free to email me and I am happy to share.

The Software

By September 2015, we’d been at it for 9 months straight – we’d designed, built, and deployed the entirety of the Crumple platform… complete with tests and CI (because that’s how I do things). It was my co-founder’s time to shine on the business front (more on this later)… Until then, let’s look at the software:

VirtualPrinter (ESC/P Parser)

source code

This is one of the first and most challenging things I wrote for Crumple.

It is a JavaScript module that we used for parsing and converting Epson Standard Code escape sequences to HTML receipts on the smartphone.

We also used it in the Terminal under Node.js to determine attributes of a print payload in order to employ rules defined for a given store.
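The parser itself is linked above; to give a flavor of the problem, here's a hypothetical toy sketch (not the real VirtualPrinter module) of the core idea. In ESC/P, ESC E turns bold on and ESC F turns it off, so a converter can map those sequences to HTML tags and pass everything else through:

```ruby
# Hypothetical toy converter (not the real VirtualPrinter module):
# ESC E = bold on, ESC F = bold off in ESC/P.
ESC = "\e"

def escp_to_html(data)
  html = +""
  i = 0
  while i < data.length
    if data[i] == ESC && data[i + 1] == "E"
      html << "<b>"
      i += 2
    elsif data[i] == ESC && data[i + 1] == "F"
      html << "</b>"
      i += 2
    else
      html << data[i]
      i += 1
    end
  end
  html
end

puts escp_to_html("\eETotal\eF $4.20")  # → <b>Total</b> $4.20
```

The real module handled far more than bold (alignment, character sets, cut commands, and so on), but the state-machine-over-a-byte-stream shape is the same.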

The Terminal

The name we used for our custom software and the physical BeagleBone Black it runs on. It runs Debian and Node.js to capture receipts and send them to the backend.

It operates as a physical Man in the Middle between the point of sale and the receipt printer. It supports serial (FTDI) and/or the beaglebone’s USB client interface with the ability to masquerade as a printer.

It kept a persistent connection with the backend via WebSocket and sent its logs to papertrail.

It broadcasted a Bluetooth Low Energy signal which enabled the mobile app to receive receipts over the air when in close proximity.

The Workbench

This project provides a command line interface for setting up Terminals from scratch.

This project made it ridiculously easy and automatic to setup new units. The web application would even show the commands to enter (e.g. when creating a new terminal for a store in the web app), making it so all a technician had to do was copy paste the command and run it with a freshly flashed beaglebone.

It was inspired by Ansible in that all the configuration was done idempotently, except that in our case the network was not yet available, so I used the serial debug cable for everything instead of SSH. The technician simply plugged the debug cable in, invoked the CLI, and waited for a success message from the web interface, as notified by the Terminal app's websocket coming online!

The Web App

It provides the admin portal, customer portal, and mobile app API. Of course all the database and pub/sub stuff is here too. We used PostgreSQL's NOTIFY feature for our pub/sub, keeping the stack super lean (i.e. no need for Redis in order to scale horizontally). This worked very well, although I never had the opportunity to stress it at scale.

The Mobile App

We used Ionic to implement our Android and iOS app. The thing that was particularly special here was the way we achieved over-the-air receipt transfers.

It was pretty magical to hover the phone over the Tablet and receive your receipt and see it rendered directly onto your smartphone.

The Tablet

An android app that would auto-discover the Terminals on the LAN and bind to them with a WebSocket. It works in a “Kiosk Mode” and was the customer-facing interface to the Terminal.

Users would use this to select how to get their receipt, enter email address, tip and sign (if credit card transaction).

(Screenshots: the idle screen; the signing screen shown for credit card transactions; tip entry; the delivery choices; the PIN code that was a security compromise; a receipt arriving over the air like magic.)

Business Post Mortem

Anyway, despite the technology, the business failed. My co-founder got burned out and quit and I had full-time classes – plus, from the start I had indicated zero interest in running the business side of things.

To put a point of finality on things, my co-founder wrote a document outlining what went wrong. I’ve quoted it below:

Hardware was not the correct way to implement paperless receipts. Hardware is very expensive and support is very cost intensive. With thousands of customers it would be very expensive to provide support to these stores. A software solution made far more sense (the cost in gas and time for my support visits to g burger and yogurt bar kill our profit margins).

I wish we knew this sooner!

The target market is very difficult to penetrate. SMB’s will spend money on things they absolutely need. Things like square and clover worked for SMB’s because it solved a big problem they had (accepting credit cards and a low price POS system). For the larger retail stores they already offer email receipts via a software solution and they don’t have to deal with problem #3.

I wish we knew this sooner too!

We not only needed to sell to businesses but we also needed to sell to the customers. Even after a store has Crumple, the cashiers or business owners must sell Crumple to customers. This was not an easy task. At both g burger and yogurt bar a very small percentage of customers actually used the Crumple app (less than 1 percent of customers). The reason email receipt is far more common is because nearly everyone has an email address. You don’t need to sell them on an additional step in order to take advantage of the service.

We knew this, but it didn’t deter us from proceeding anyway.

Poor Sales/Distribution Strategy. The person in charge of sales and marketing was completely clueless and had no prior experience. There was no clear plan on how to get Crumple to business owners.

Although he is right, my co-founder is just being self-deprecating here.

No product market fit. Even for the two businesses we did acquire as “customers”, neither truly loved the product. It’s not a product that they can’t live without. We also didn’t achieve product market fit with customers. For those who downloaded the app, very few were repeat users. Most only used it once or never used it at all.

This doesn’t seem like something we could have known until we tried.

We built too fast. We rushed it. With more market research and by surveying business owners we would have realized points 1-5 and potentially saved ourselves time and money. It's very crucial to build something that solves a big problem and that people really want.

I should have stuck to my guns early on when I initially proved the technology – but I allowed myself to be convinced by my co-founder that we needed “the real product” and “just one more feature” over and over again.

I would say “Lesson Learned” but I have a feeling that this experience requires further analysis before its lessons are fully internalized.

DIY WiFi Garage door opener and status checker

I borrowed my brother’s iPhone in order to capture the video.

Summary

My dad and I made a wifi garage door opener and status checker.

We did this project crazy-fast and without any hang-ups, which was interesting to me because it showed how far along the tools have come.

Hardware

  • Original Raspberry Pi
  • 1x Opto-isolator
  • 1x Reed switch

Embedded Software:

  • OS: Nerves (Linux boot to Erlang)
  • App: Custom Elixir app source

Mobile Software:

  • App: Custom Ionic app source

Problem

Our garage door opener has a physical panel just outside where one can enter a 4 digit pin and open the garage.

After some fairly recent rain storms, the panel has been malfunctioning and fails to work 95% of the time. It’s not a battery issue.

It was expensive to replace that from the manufacturer, and we didn’t want the same faulty product!

We want to know if the garage door is open or closed and be able to open or close it remotely.

Solution

This would take two GPIO pins.

  1. Read logic level 1 or 0 based on the garage door being open or closed. A reed switch affixed to the door's frame, adjacent to a strong magnet on the door, will do the trick.

  2. Write logic level 1 to a circuit that simulates a button press on the garage door's manual switch, located inside the garage. We used an opto-isolator for this.

Build-out

Embedded

I used Nerves on an old Raspberry Pi.

Nerves makes it possible to create minimal ARM firmware that boots directly into the Erlang virtual machine (BEAM).

Essentially it helps you compile a barebones linux kernel with the init system replaced.

I chose Nerves because I like Elixir and the emphasis placed on making fault-tolerant systems right from the start. I think it’s quite apt for the embedded space.

After making Nerves development easier on Mac, I started the project.

Controlling GPIO pins was easy to do thanks to Elixir ALE.

The core concept is to listen on HTTP and allow read/write on any GPIO pin:


defmodule Codelock.Router do
  use Plug.Router

  plug Plug.Logger
  plug Corsica, origins: "*"
  plug :match
  plug Plug.Parsers, parsers: [:urlencoded, :json], json_decoder: Poison
  plug :dispatch

  def start_link do
    {:ok, _} = Plug.Adapters.Cowboy.http Codelock.Router, []
  end

  post "/digital_write/:gpio_out/:value" do
    gpio_out |> String.to_integer |> digital_write(String.to_integer(value))
    send_resp(conn, 200, "{}")
  end

  post "/digital_read/:gpio_in" do
    value = gpio_in |> String.to_integer |> digital_read
    send_resp(conn, 200, Poison.encode!(%{value: value}))
  end

  match _ do
    send_resp(conn, 404, "Not found")
  end

  defp digital_write(pin, value) do
    {:ok, pid} = Gpio.start_link(pin, :output)
    Gpio.write(pid, value)
    IO.puts "Wrote #{value} to pin #{pin}"
    Process.exit(pid, :normal)
    value
  end

  defp digital_read(pin) do
    {:ok, pid} = Gpio.start_link(pin, :input)
    value = Gpio.read(pid)
    IO.puts "Read #{value} from pin #{pin}"
    Process.exit(pid, :normal)
    value
  end
end

Circuit

Credit goes 100% to my dad for all the circuit design and prep downstream of the GPIO pins. If you’d like to replicate this look at the reference schematics for an appropriate voltage reed switch and opto-isolator.

A Normally Open (NO) reed switch and a magnet are used as the garage door status sensor. The reed switch is installed on the frame, and the magnet on the moving door. When the door is in the closed position, the switch shorts the Pi's input to ground, indicating logic "0"; otherwise that input reads logic "1." We chose a passive reed magnetic sensor over a Hall-effect sensor because the latter would have required access to a supply voltage, and therefore we would have had to run 3 wires from the Pi to the sensor.

All I had to do was put the board together with the Raspberry Pi:

(Photos: the bare boards and a close-up of the Pi.)

Mobile App

Prior experience indicated that Ionic would make the mobile app development portion trivial.

The core concept for the mobile UI here was to list one or more “things” that have state and can be toggled:

<ion-view view-title="Dashboard">
  <ion-content class="padding">
    <div class="list card" ng-repeat="thing in things">
      <div class="item item-divider" ng-init="thing.init()">{{ thing.label }}</div>
      <div class="item item-body">
        <div>
          State: {{ thing.state }}
        </div>
        <button class="button button-full button-positive" ng-click="thing.toggle()">
          Toggle
        </button>
      </div>
    </div>
  </ion-content>
</ion-view>

The core concept for the controller was to provide the list of “things” that know how to fetch state and be toggled:

$scope.things = [{
  label: "Garage Door",
  state: "unknown",
  init: function() {
    var self = this;
    var fetchState = function() {
      $http.post('http://garage:4000/digital_read/4', {}, httpConfig)
        .success(function(res) {
          if (res.value === 1) self.state = "Open";
          else if (res.value === 0) self.state = "Closed";
        });
    };
    fetchState();
    setInterval(fetchState, 5000);
  },
  toggle: function() {
    $http.post('http://garage:4000/digital_write/18/1', {}, httpConfig).success(function() {
      setTimeout(function() {
        $http.post('http://garage:4000/digital_write/18/0', {}, httpConfig);
      }, 800);
    });
  }
}];

This code could definitely use improvement (and reveals other problems), but this resulted in a decent app:

ionic app

Installation

After soldering the wires from our opto-isolator into the manual switch, we’ve got this:

manual switch

The last step was hooking up the reed switch to the door. You can see the reed switch in the left of the pic below:

reed switch

You can see the whole thing in action in the video at the top of this post.

Setup go compiler on arm and compile buildkite agent

Notes on getting the Buildkite agent running on BeagleBone/ARMv7.

Beaglebone Black running Debian

Linux arm-worker 3.8.13 #1 SMP Mon Sep 22 10:22:05 CST 2014 armv7l GNU/Linux

After some failed attempts with default sources and unmaintained PPA’s, I found Dave Cheney’s website where he distributes ARM tarballs of Go: http://dave.cheney.net/unofficial-arm-tarballs

# Get mercurial, need it later for some packages
apt-get update
apt-get install mercurial

# Get Go
wget http://dave.cheney.net/paste/go1.4.2.linux-arm~multiarch-armv7-1.tar.gz
sha1sum go1.4.2.linux-arm~multiarch-armv7-1.tar.gz
# should be 607573c55dc89d135c3c9c84bba6ba6095a37a1e

tar -zxvf go1.4.2.linux-arm~multiarch-armv7-1.tar.gz

# Setup your Go installation
export GOROOT=$HOME/go
export PATH=$PATH:$GOROOT/bin

# Setup your GOPATH
export GOPATH="$HOME/Code/go"
export PATH="$HOME/Code/go/bin:$PATH"

# Get godep
go get github.com/tools/godep

# Checkout the code
mkdir -p $GOPATH/src/github.com/buildkite/agent
git clone git@github.com:buildkite/agent.git $GOPATH/src/github.com/buildkite/agent
cd $GOPATH/src/github.com/buildkite/agent
godep get

# Test it
go run *.go start --debug --token "abc123" --bootstrap-script templates/bootstrap.sh --build-path ~

Find listening process on mac os x

Let’s say you have something listening on port 4000, when you hit that port nothing happens, and you can’t start any services on that port because something is using it.

On Mac you can find out what process is using it by executing lsof -i :4000

This command will show you the program, pid, and many other pieces of information you can use to track down and kill the process.

Raspberry pi iphone tether

iPhone Tethering on Raspberry Pi

The instructions here are a good starting point for understanding how this works, although the current packages in the Arch and Debian repositories do not work with iOS 7 (the trust-loop bug).

https://wiki.archlinux.org/index.php/IPhone_Tethering

iOS 7 Support

Install libimobiledevice from latest source

In order to get iOS 7 support, we need to compile everything from source. Find the scripts for ArchLinux and Raspbian here: https://gist.github.com/kfatehi/8922430

Usage

Mounting your iPhone

Start usbmuxd: usbmuxd

Create a mount point: mkdir /media/iphone

Mount the device: ifuse /media/iphone

(You can unmount using umount /media/iphone)

You should now be able to view the contents of your iPhone.

Networking

At this point you should reboot so that the modules and rules get loaded. After that, I gave up on Arch Linux due to issues getting actual network traffic across, so I can't speak for Arch Linux from here on. I did have success on Raspbian, however: you should be able to simply plug in your iPhone, see a new interface come up, and ping the outside world. Enjoy!

Npm mirror

UPDATE Feb 2, 2014

Don’t bother with any of this. Use sinopia instead! I’ve prepared a docker image for it as well.


This tutorial is my version of this other tutorial.

First off, make sure you have a large enough disk drive. You can find out the remote registry's size easily like this:

[root@alarmpi ~]# curl http://isaacs.iriscouch.com/registry/
{"db_name":"registry","doc_count":52298,"doc_del_count":4836,"update_seq":861940,"purge_seq":0,"compact_running":false,"disk_size":214735753351,"data_size":174529391409,"instance_start_time":"1387441403828175","disk_format_version":6,"committed_update_seq":861940}

data_size lets us know that at the time of this writing you’ll be safe
with a 250GB SSD, but it’s anyone’s guess when that will be
insufficient. Currently that’s what I’m running.
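For reference, converting that data_size figure into gigabytes (a throwaway sketch using the numbers from the response above):

```ruby
require "json"

# The stats document, trimmed to the two size fields from the curl output above.
stats = JSON.parse('{"disk_size":214735753351,"data_size":174529391409}')
gb = stats["data_size"] / 1_000_000_000.0
puts format("%.1f GB of package data", gb)  # → 174.5 GB of package data
```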

Install & Configure CouchDB

On Mac OS X

  1. Use Homebrew to install couchdb: brew install couchdb read the
    caveats to make it autostart on reboots
  2. Point your browser to the CouchDB configuration page which should now be available at http://127.0.0.1:5984/_utils/index.html

On a Raspberry Pi w/ Arch Linux

  1. Grab the Arch image from http://www.raspberrypi.org/downloads
  2. Determine the device path for your SD card with df at the terminal.
  3. Unzip and then dd the image to path you found: sudo dd bs=4m if=/path/to/img of=/dev/mysd make sure you point to the card and not
    the partition (e.g. /dev/disk1, not /dev/disk1s1)
  4. Insert the SD card, start the Pi, and ssh into it (root/root)
  5. Format your external drive, ensure it is mountable,
    and that your /etc/fstab is set to
    automount it. Use lsblk -f to discover it. If you need more help check
    this ArchWiki Page
  6. Download some packages: pacman -Sy couchdb for more info see this ArchWiki Page
  7. Edit /etc/couchdb/local.ini and set ;bind_address = 127.0.0.1 to bind_address = 0.0.0.0 if you want to access it from another system
  8. Edit /etc/couchdb/local.ini and under [couchdb] add these 2 lines, per your storage location:
    database_dir = /media/storage/couchdb and view_index_dir = /media/storage/couchdb
  9. Give the couchdb daemon permission to write to your external storage:
    chown couchdb:daemon /media/storage
  10. Setup couchdb to autostart after reboot systemctl enable couchdb
  11. Start couchdb with systemctl start couchdb
  12. Connect to Futon using your-ip:5984/_utils

CouchDB configuration continued

Now you have CouchDB installed and can access Futon and hopefully the
internet.

From Futon click “Configuration” and find secure_rewrites and set it to false

Tell CouchDB to replicate continuously from NPM

Open terminal and enter the following to setup continuous replication
from the official npm registry:

curl -X POST http://127.0.0.1:5984/_replicate -d '{"source":"http://isaacs.iriscouch.com/registry/", "target":"registry", "continuous":true, "create_target":true}' -H "Content-Type: application/json"

Tell CouchDB to stop replicating

In case you ever need it:

curl -X POST http://127.0.0.1:5984/_replicate -d '{"source":"http://isaacs.iriscouch.com/registry/", "target":"registry", "continuous":true, "create_target":true, "cancel":true}' -H "Content-Type: application/json"

Making sure that it keeps replicating

There’s an unfortunate issue that I’m experiencing with couchdb, but it may
just stop replicating often only after transferring 5-7 GB of
data – a trivial retrigger of the replication with the above command
would cause it to pick up where it left off, so I’ve developed a script.

  1. Install nodejs
  2. npm install npm-replication-watcher
  3. npm install forever
  4. forever npm-replication-watcher/bin/npm-replication-watcher

For more information check out
npm-replication-watcher

Finalizing the installation

This information can also be found here https://github.com/isaacs/npmjs.org. That link will also explain how to tell npm to use your new registry.

  1. git clone git://github.com/isaacs/npmjs.org.git
  2. cd npmjs.org
  3. sudo npm install -g couchapp
  4. npm install couchapp
  5. npm install semver
  6. couchapp push registry/app.js http://localhost:5984/registry
  7. couchapp push www/app.js http://localhost:5984/registry

Testing the installation

  1. npm --registry http://localhost:5984/registry/_design/scratch/_rewrite login
  2. npm --registry http://localhost:5984/registry/_design/scratch/_rewrite search