Blog


Troubleshooting IPSec VPNs for Complete and Utter Dumb-asses (posted by dave)

Long awaited IPSec VPN How-to Guide:

  1. Don't have old IPSec/pluto daemons running in the background from deprecated VPN software installs
  2. Update your iptables firewall to allow ESP/UDP-500/UDP-4500 from the current source address of the remote end of the tunnel, and (this is key) not the IP address that end had before your VPS was assigned a new IPv4 address over six months ago
...and it's just that easy folks. By following this easy two-step guide you too can avoid wasting hours wondering why only one side of your IPSec VPN tunnel renegotiates IKE and IPSec SAs.
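
For the record, the step-2 rules on my end boil down to something like this (the peer address below is a placeholder--obviously substitute the remote end's *current* IP):

[ariens@vps1 ~ ]# sudo iptables -A INPUT -p esp -s 203.0.113.7 -j ACCEPT
[ariens@vps1 ~ ]# sudo iptables -A INPUT -p udp --dport 500 -s 203.0.113.7 -j ACCEPT
[ariens@vps1 ~ ]# sudo iptables -A INPUT -p udp --dport 4500 -s 203.0.113.7 -j ACCEPT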


Neglected But Not Abandoned (posted by dave)

[Yikes, I've neglected to post or contribute anything to this site in almost an entire year!]

The Neglect

A few nights ago I dusted off my keyboard and set about trying to figure out how to access my web and mail hosts, thinking I'd maybe install some security updates. Surprisingly, I found myself at a command prompt within seconds this time, as opposed to the usual hours of forgotten passwords and re-configuring security via console/out-of-band before I could access my VPS hosts. Lucky for me, the one host in the world with access to these VPS servers (a VM on a workstation on my LAN) actually booted and I remembered the password. I could even ping/SSH to my VPS servers across the VPN (not bad for a year elapsing without any monitoring!).

The much needed TLC

I first did a system-wide package upgrade on my secondary node--everything went smoothly and the host rebooted, so I moved on to the primary knowing that if it failed to launch any of its services it would be easy enough to troubleshoot. As usual, the PostgreSQL DB didn't come back, the mail services didn't start, and none of my Flask apps would launch on their local ports for Nginx, either. One at a time I found the problems and got them back up. The only real PITA was the Flask apps (i.e. this site), for which I stubbornly upgraded all pip dependencies to their latest versions as well. Some weren't backwards compatible and others failed outright. Again, I resolved them one at a time, but luckily I only had to make minor updates to support non-backwards-compatible changes introduced in Flask-Login 0.3.x. Changes are pushed to GitHub; I'll commit the current pip dependency list sometime soon.
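
(For anyone who hits the same wall: the Flask-Login 0.3.x change that bit me was is_authenticated/is_active/is_anonymous on the user model becoming properties instead of methods--simplified from memory, but roughly this kind of change:)

# Flask-Login 0.2.x style: these were methods on the user model
class OldStyleUser(object):
    def is_authenticated(self):
        return True

    def is_active(self):
        return True

    def is_anonymous(self):
        return False

# Flask-Login 0.3.x: the same attributes are expected to be properties
class NewStyleUser(object):
    @property
    def is_authenticated(self):
        return True

    @property
    def is_active(self):
        return True

    @property
    def is_anonymous(self):
        return False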


Facebook and Twitter Login Support (posted by dave)

A quick note that you can now select Facebook and Twitter from the login page for those who don't want to bother creating another account on another site.


Chaos (posted by dave)




Quick Pics (posted by dave)

Here's some of the better shots off my camera roll from the last month or so...


My Son (posted by dave)




Astonishing Default Parameter Value Handling in Python Constructors (posted by dave)

Yikers! It's embarrassing to be bitten by a bug as elementary as this, but I figured out why I couldn't reset my password on my Flask apps (read: this site, and my Beer site). I was using a model for the email activation object for new user registrations and password resets that, when instantiated, defaulted its date_created attribute to datetime.utcnow(). Well, astonishingly enough, that default value is evaluated once, when the function is defined, not on every invocation. The result? Password resets and new user registration confirmation attempts only succeeded within $ACTIVATION_CODE_VALID_FOR_SECONDS seconds (read: a day) after the server was (re-)started.
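
(A stripped-down illustration of the gotcha and the fix--not the actual model code, just the shape of it:)

from datetime import datetime, timedelta

class BrokenActivation(object):
    # BUG: datetime.utcnow() is evaluated once, when the def statement runs
    # (i.e. at import/startup), so every instance shares that one timestamp
    def __init__(self, date_created=datetime.utcnow()):
        self.date_created = date_created

class FixedActivation(object):
    # Fix: default to None and grab the timestamp at instantiation time
    def __init__(self, date_created=None):
        self.date_created = date_created if date_created else datetime.utcnow()

    def is_expired(self, valid_for_seconds=86400):
        return datetime.utcnow() - self.date_created > timedelta(seconds=valid_for_seconds)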

It was trivial to fix but the bug speaks volumes about why developers should always challenge themselves to adopt a 'test first' methodology.


Mmm... Fish! (posted by dave)

Steve and I took the girls to the Aquarium while the moms enjoyed an afternoon to themselves. 90% of the pictures didn't turn out, but here are a few that weren't horrible. I was so impressed that Lexi was able to walk the whole way there, through, and back without any real fuss that I might have caved at the gift shop. At least I talked her down from the $100 stuffed mermaid :)


I Blame Beer (posted by dave)

Here's some suspicious beer-related pics I found on my phone after having supposedly taken a break from beer. The frozen one from my kitchen window sill was picked up from a planter box on my front porch, where it's lived happily since, I'm guessing, mid-spring 2014...? It must have since filled with rain water/ice melt, then re-frozen and burst before I found it while hauling in the chairs and whatnot for winter storage--my favourite!

Anyway, easy enough to create a temp space to store the images on the remote server:

[ariens@vps2 ~ ]# sudo mkdir /root/new_beer

...and then SCP the images over to be posted:

[ariens@vm1 ~ ]# scp /run/user/1000/gvfs/smb-share:server=dave-pc,share=media/Pictures/new_beer/*.jpg root@vps2:~/new_beer/
WP_20141111_20_21_46_Pro__highres.jpg                                                  100%   11MB 412.1KB/s   00:28    
WP_20141112_17_38_58_Pro__highres.jpg                                                  100% 6935KB 495.4KB/s   00:14    
WP_20141112_20_37_42_Pro__highres.jpg                                                  100% 6593KB 470.9KB/s   00:14    
WP_20141114_16_35_13_Pro__highres.jpg                                                  100% 8324KB 462.5KB/s   00:18    
WP_20141115_14_23_54_Pro__highres.jpg                                                  100% 8536KB 449.3KB/s   00:19    

...and then you just need to run the poorly named fs_post.py and you're gtg:

[ariens@vps2 ~ ]# cd /srv/http/www/flask_apps/ariens_www/
[ariens@vps2 ~ ]# ./fs_post.py dave\@ariens.ca "I Blame Beer" /root/new_beer/*.jpg
file /root/new_beer/WP_20141111_20_21_46_Pro__highres.jpg file_path /srv/http/www/flask_apps/ariens_www/app/static/article_attachments/26_1416563472.980323_WP_20141111_20_21_46_Pro__highres.jpg
There was an attached file: 26_1416563472.980323_WP_20141111_20_21_46_Pro__highres.jpg
file /root/new_beer/WP_20141112_17_38_58_Pro__highres.jpg file_path /srv/http/www/flask_apps/ariens_www/app/static/article_attachments/26_1416563473.682254_WP_20141112_17_38_58_Pro__highres.jpg
There was an attached file: 26_1416563473.682254_WP_20141112_17_38_58_Pro__highres.jpg
file /root/new_beer/WP_20141112_20_37_42_Pro__highres.jpg file_path /srv/http/www/flask_apps/ariens_www/app/static/article_attachments/26_1416563473.985218_WP_20141112_20_37_42_Pro__highres.jpg
There was an attached file: 26_1416563473.985218_WP_20141112_20_37_42_Pro__highres.jpg
file /root/new_beer/WP_20141114_16_35_13_Pro__highres.jpg file_path /srv/http/www/flask_apps/ariens_www/app/static/article_attachments/26_1416563474.27814_WP_20141114_16_35_13_Pro__highres.jpg
There was an attached file: 26_1416563474.27814_WP_20141114_16_35_13_Pro__highres.jpg
file /root/new_beer/WP_20141115_14_23_54_Pro__highres.jpg file_path /srv/http/www/flask_apps/ariens_www/app/static/article_attachments/26_1416563474.560134_WP_20141115_14_23_54_Pro__highres.jpg
There was an attached file: 26_1416563474.560134_WP_20141115_14_23_54_Pro__highres.jpg



'Mo Pixels 'Mo Problems (posted by dave)

I've been rocking the Lumia 1020 for about a week now and finally have some real-world results to post regarding the image quality. First off--a rant. The pictures are wonderful, brilliant, and gigantic--they are, however, not very accessible. The two common high-res settings for the phone's camera take snaps in either 5MP + DNG (a raw format), or 5MP + 38MP. I have very little use for the raw images, namely because I have no clue what I'm doing when it comes to editing. That leaves me with the 38MP option for high-resolution needs. The problem comes when you want to access them: the 38MP images are only available if you connect the Lumia to a PC over USB. This means my primary method of posting pictures (via email on the phone itself) isn't possible. I'm flabbergasted--c'mon Microsoft... WTF?!

Anyway--fine, I'll attach whenever I'm at home or have some downtime at work... No biggie. Of course, then my webmail client didn't allow attachments greater than 2MB... Obviously, nothing's just _easy_ anymore. Fixed that (perk of hosting your own MTA), then of course it's Postfix's turn to complain about max message size. I had it set to a reasonable 50MB... But a quick check on the folder I was storing this post's pics in showed that they were 260MB... :) I'm not sure I wanted to open my MTA up to mails quite that large, so I needed another option...

I hacked out a quick Python script that lets me point to a bunch of images on the file system and attach them to a new article. It's nothing groundbreaking, but it's made super quick work of posting images in bulk.

[ariens@vps1 ~ ]# ./fs_post.py -h
usage: fs_post.py [-h] email title file [file ...]

Post an article with local filesystem attachments

positional arguments:
  email       The email address of the poster
  title       The title of the article to post
  file        The file to attach to the article

optional arguments:
  -h, --help  show this help message and exit


The slick part, IMO, was argparse... Since I'm still so new to Python I'm finding new modules all the time that just make sense... I've used getopt for about 10 years in other languages and never really thought about why... or how much more powerful it could have been.
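
The argument handling that produces the help output above is only a handful of lines--roughly this:

import argparse

parser = argparse.ArgumentParser(
    description='Post an article with local filesystem attachments')
parser.add_argument('email', help='The email address of the poster')
parser.add_argument('title', help='The title of the article to post')
parser.add_argument('file', nargs='+', help='The file to attach to the article')
args = parser.parse_args()
# args.email, args.title, and args.file (a list) then get handed off to the posting code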

The IPsec VPN between my servers and my home network has been less than reliable... An 'ipsec verify' audit warned me that I had some undesirable config in place... Here's how I removed it. (I'm adding it here mainly because I was too lazy to add it to any boot script or IPsec service init script, and I'll likely need it shortly after my next reboot.)

[ariens@vps1 ~ ]# for i in `ls /proc/sys/net/ipv4/conf/*/send_redirects`; do echo 0 > $i; done
[ariens@vps1 ~ ]# for i in `ls /proc/sys/net/ipv4/conf/*/accept_redirects`; do echo 0 > $i; done
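
(If I ever stop being lazy, the persistent version is just a sysctl drop-in--something along these lines, untested on my boxes, and the filename is whatever you fancy:)

[ariens@vps1 ~ ]# cat /etc/sysctl.d/90-no-redirects.conf
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.conf.default.accept_redirects = 0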


Here's the first batch of somewhat decent pictures I've been able to take, some from the weekend, Aria's Birthday, and more... Disclaimer--these are just straight off my camera--no editing, cropping, or enhancements of any kind.


Microsoft takes .NET Open Source and Cross-Platform (posted by dave)

Good news, everyone! I just read on /. that MS has announced that .NET is going open source and cross-platform! I am pretty pumped to get back into C# development, seeing as how it's one of my favorite languages :)


A Necessary Evil (posted by dave)

Since updating this site with picture-posting ability via e-mail I have been obsessing over the image quality of the pictures that I want to post. I've been finding them embarrassingly poor. The primary reason is that, as a long-time die-hard BlackBerry fan, I've spent the last 7 years switching devices ALL THE TIME. I've had almost every model of BB made during that time and found reasons to love them all. There are some that are great at media, others at business/productivity, some that cost a fortune, and some that are really inexpensive. Unfortunately, I've found that the model that's tolerated my shit and abuse the best is the humble Q5. I loved everything about it, from the range of available colors to the keyboard, and how much of a basic commodity it was. I never felt worried about it because, worst case scenario, there's another one readily available and I'm not out of pocket too bad. Win! I say "unfortunately" because the little Q5 also happens to have the camera with the most... er... "opportunity" among the lineup. After weeks and weeks (read: an afternoon) of strenuous research and investigation (read: googling "best camera phone 2014") I bought myself a Nokia Lumia 1020.

This post isn't about picking winners or favorites (that post will come after I've used it for a while), it's about stating the importance of having the best tool for the job. I have a new son or daughter arriving imminently and I won't always have access to a super fancy DSLR. Even if I had one, I wouldn't be carrying it all the time, and I don't ever want to be caught unprepared, ergo--the 1020.

Anyway... my first gripe about the damned thing came shortly after I stopped playing with the camera and tried to configure my personal e-mail. Turns out Windows 8 has a terribly finicky association to SMTP mail protocols, TCP ports, and UI settings. I'll be damned, but I simply cannot configure STARTTLS on TCP 587 on any Windows 8 device.

TL;DR: Since I roll my own MTA I was able to open up 465--but it's dirty and wrong. I hope I'm either missing something (it's late) or that MS fixes this really soon. Anyway, here's how I accommodated the antiquated mail requirements of the 1020:

You'll need to edit your Postfix master.cf:

[ariens@vps1 ~ ]# sudo vim /etc/postfix/master.cf

...and add/un-comment the following (just the two uncommented smtps lines below; the neighbouring commented lines are shown for context):

...
#  -o syslog_name=postfix/submission
#  -o smtpd_sender_restrictions=$mua_sender_restrictions
#  -o smtpd_recipient_restrictions=permit_sasl_authenticated,reject
#  -o milter_macro_daemon_name=ORIGINATING
smtps     inet  n       -       n       -       -       smtpd
   -o syslog_name=postfix/smtps
#  -o smtpd_sasl_auth_enable=yes
#  -o smtpd_reject_unlisted_recipient=no
#  -o smtpd_client_restrictions=$mua_client_restrictions
#  -o smtpd_helo_restrictions=$mua_helo_restrictions
...

[ariens@vps1 ~ ]# sudo systemctl restart postfix

...and if you hose the whole mail server like I did and find the following in your logs:

postfix/master[5309]: fatal: 0.0.0.0:smtps: Servname not supported for ai_socktype

Then it's prolly because your Linux server doesn't even know that mail goes to that port anymore, so you have to define it in your system-wide services config (don't forget to restart Postfix one more time after):

[ariens@vps1 ~ ]# sudo vim /etc/services

...
kpasswd           464/tcp
kpasswd           464/udp
smtps             465/tcp
smtps             465/udp
urd               465/tcp
igmpv3lite        465/udp
...
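
A quick sanity check that the new smtps port is actually wrapped in TLS (run it from the box itself, or swap localhost for your MX hostname):

[ariens@vps1 ~ ]# openssl s_client -connect localhost:465 -quiet

If Postfix is happy you'll get a TLS handshake followed by the usual 220 ESMTP banner; if not, you're back in the logs.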



More Fun w/ Lex (posted by dave)

A few more pics I found on my BlackBerry. Mostly this was an excuse to test out a new Python script I wrote that allows me to e-mail a special address that automatically creates posts on this site. I'll likely post about that in a little more detail later, but for now I'm happy having mostly the same functionality that I had back when I was using Posterous (before it shuttered).


So Long Summer! (posted by dave)

A whirlwind blur is what you call it... Here's a few random pics from my BlackBerry that I'll use to say goodbye to the warm weather and the blink of an eye known as summer in Canada.


Okay, So I'm Lazy... 'evs! (posted by dave)

Yikers Island! 8 months since my last post--wtf?!

So, there's a slew of _kinda_ new stuff going on that I suppose I could be mentioning. First off, a wee bit of a confession. That whole "Layer Zero" (*hand waving*) web platform was, well... kinda somewhat of a ruse ;)  It's not that I haven't been building it/stuff, 'cuz that's always going on; it's just that only a fraction of it ever gets polished enough for public consumption. That, and there simply just isn't a platform, but that's neither here nor there.

The downfall all started pretty much at the beginning, well over a year ago, shortly after writing all that PHP that was previously serving up this site. At the time, when I was first setting out to get something new built and writing those first few lines of code, I only briefly contemplated the tools I was using. Make no mistake--I definitely made a couple of really good decisions (100% DIY? yuppers! cloud based? great! clever content and configuration management and replication? check! etc...) but then I did something that I felt and knew was dirty and defaulted to using a programming language that I have kept abusing for far too long. It's no different than any other bad habit. It's easy, so you keep doing it. There's a guaranteed "gimmie" booty call analogy in there that I won't elaborate on any further--you get it, we've all done it.

So, pretty much as soon as I began I started to find myself thinking about another. I first did a little research on the modern languages and their merits... I had already started experimenting with Ruby (a necessity after having banged out loads of Puppet and Chef content over the years) but I wasn't diggin' it. I was too intimidated to get into Java... The thought of the JVM and the overhead and the bulk... Ugh! [***spoiler alert*** future Dave gets a new job as a Java dev... much more on that in a future post] So what did that leave me with? Perl? Self-kill. C#/.NET? By far my favorite, but UNIX 'cuz live free or die hard, amirite?

Of course I chose Python.

I'll prolly get into it in excessive detail in a later post but for the warm rice wine of brevity, I saw Flickr pictures of what I think was something called Summer of Code at what I _think_ was a cottage that Moxie Marlinspike rented, and there were pics of (I'm judging) hipster devs working away on laptops and sitting on couches overlooking the ocean. Sold.

I have no clue what language they were using or even what they were building, but that didn't matter. I wanted to FEEL that way when I wrote code. I wanted freedom, purity, and beauty--I wanted to be happy doing my favorite thing :) So, I imaged my laptop with Mint (another future post subject candidate), put in earbuds (later upgraded to Bluetooth cans), bought PyCharm, and never looked back. I made the opposite decision for every knee-jerk reaction I had when presented with a new choice. I used other people's libraries, ORMs, micro frameworks, *EVERYTHING* I could get my hands on to be better, faster, learn more.

Oh, it wasn't all that blissful... "I love having a stupid hot laptop on my lap and sitting for prolonged hours on the couch"--No one, ever. Even the coding was... fun. Turns out there was a "kind of a big deal" moment in the Python community when the language hit version 3, and all those awesome libraries I had available weren't always 100% supported--in lovely, finicky ways that, when I was first starting out, almost made me question if there was anything still good in this world. But from the other side, I can safely say taking all that on the chin was one of the best things I could have done.

Anyway, TL;DR 'cuz Zzzzzz: https://github.com/ariens/ariens_www


Geek Lifestyle (posted by dave)

Since opening up Fourtitude Brewing Co. with Moose I've been meaning to dive into the Layer Zero web platform code and set it up to power the new brewery's web site. I have been neglecting this for numerous reasons, with the most obvious being that we didn't have any real material to publish. That's about to change, however, as Matt has managed to get a proof-of-concept up and running that uses an Arduino board, a basic relay, and digital temperature probes to poll and record the various temperatures we are interested in. The most obvious use cases for this are custom-built fermentation chambers that monitor the temperature for a given batch of soon-to-be beer and turn hot/cold sources on or off to maintain a target temperature; however, we are also hoping to extend this monitoring to boiling wort and mashing.

We currently have the fermentation chamber temperatures under control nicely with a few STC-1000s either controlling deep freezers (for lagers) or insulated chambers sitting in an environment with an ambient temperature below our target fermentation temperature (most ales), for which we just power a light bulb to keep bumping the temperature up to our target. The problem with this is that we obviously require a dedicated STC-1000 per fermentation chamber ($30 CAD ea.) and we cannot monitor/graph. Hence, the Arduino. This will allow us to emulate a stand-alone STC-1000 for as many pins as we're willing to burn on the Arduino board. We already have the basics of setting a target temperature, adjusting the delta threshold, compressor delay, and polling/setting each value, so we're 80% of the way there.

Anyway, back to Layer Zero... Of course, I need a web site to host the recipes and auto-generated temperature graphs. Taking this another step further, it's not entirely out of the realm of possibility to build an administrator module to set/query the various fermentation chamber attributes and configurations as well. That's what I've started. The biggest challenge was going in and properly re-implementing some of the shortcuts I took building out this web platform. I essentially re-wrote the login and activation modules, fixed a few bugs, and made sure they were implementation- and customer-agnostic. This means that I'll shortly be able to release some sort of interface (maybe my.layerzero.ca) where customers can create/manage their own sites. It's essentially just a basic CMS, but I am forcing myself to do this the hard way and build the multi-tenant CMS from the ground up as opposed to just copying the web root of this site and programming specific content that cannot be made public to other potential hosting customers.

The other activities I've been occupying my time with have been home improvements to my WIFI network. For various reasons my WIFI network has been less than 100% reliable. This could be a result of the majority of my WIFI clients being mobile devices running very bleeding-edge beta versions of their operating systems, or the complex routing I have in place on my LAN (150+ static routes over IPsec VPN tunnels to have various types of traffic originate on the Internet in various countries/locations), or the complex WPA2 EAP-TLS encryption and authentication that I have in place so clients are identified with signed certificates as opposed to pre-shared keys, or the eclectic AP hardware combination that I have in play (WRT-54G running DD-WRT, Cisco Aironet 1240ag, and a Meraki MR12). Point being, I was in the process of sharing some configurations with a friend and completely shot myself in the foot. She had asked that I provide some of my RADIUS configurations and, in the process of compiling them, I noticed some odd log messages on the server. Upon closer examination I saw that a specific client's authentication was constantly being rejected. I poked, I obtained the client's device, poked further, and sure enough--every one of their authentication attempts was failing. The error was vague and I couldn't clue into the root cause immediately.

Since I'm a terrible sysadmin, I ignored the specific error messages and turned my attention to upgrading the FreeRADIUS package, which was already nearly a year old. Turns out, that was a big mistake. My RADIUS server was running on a host I call 'auth', a little Raspberry Pi that provides RADIUS and authoritative DNS for my home network. It's running an ALARM (Arch Linux ARM) distribution and hasn't been so much as glanced at since I flashed the SD card and configured it for EAP-TLS almost a year ago. The kicker was that the entire configuration system of the previous version (2.x) was deprecated in the new version (3.x). I tuned, tweaked, re-compiled and re-configured until nearly 2am on a work night (morning, really) before finally giving up. I reverted to WPA2 pre-shared key and went to bed.

The next day I decided to go shopping. After purchasing some TP-LINK TL-WDR3600s I went about upgrading the physical AP infrastructure. The 802.11n-based Meraki MR12 is now going to become my guest WIFI network: I'll burn a port on my Juniper SRX210 PoE, dedicate a specific VLAN for guests, source-NAT the outbound internet traffic, and lock down my private LAN.

There's a kicker though. The MR12 has zero ability to perform DHCP services, so it either needs to join the existing LAN w/ the DHCP server or act in bridge mode, which means that clients will have to statically assign their own IP, subnet, gateway and static routes. This sucks! It means that I now have to deploy a DHCP server on that VLAN for this to be even remotely useful. The kicker? That DHCP server will likely be that Raspberry Pi, and I'll migrate the existing DHCP, RADIUS and DNS services to an alternative host on my LAN. All this because I started to read some log messages. Such is geek life, I suppose.


Platform Enhancements: Mail and System (also, hello 2014!) (posted by dave)

I received an e-mail today from a friend with the subject "reply if you get this". I smiled before I even read the message since it reminded me of the 90's. She was testing out a new DKIM implementation on her mail server and was hoping the new message signing was being respected by other mail servers. Of course, it's easy to test against the big players (Google, Yahoo, Hotmail) but it's not always so easy to know how those smaller custom MTAs are going to treat the goods. I also got blasted since I wasn't already signing my own e-mails. Most folks will take mail for granted, but as I stated months ago, "mail servers really like to think everything is spam". DKIM allows messages to be signed with a private key whose public half is published by the domain owner in DNS TXT records. It complements SPF, which allows the domain owner to publish a list of the SMTP servers that are allowed to send mail on behalf of the domain's users. DKIM goes one step further by cryptographically proving the message came from an authorized SMTP server... Anywho, I delved into the available options for Arch Linux and quickly had OpenDKIM installed, some private/public keys generated, BIND updated, and Postfix's main/master.cf files configured to sign outgoing mails accordingly. Feels better, no more false negatives, and my users benefit.

[Edit (2014-02-09): Forgot the due credit (not that it's too hard to google)--https://wiki.archlinux.org/index.php/OpenDKIM]
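
(For posterity, the glue was roughly the below--this assumes OpenDKIM listening on 127.0.0.1:8891 and the stock 'default' selector from opendkim-genkey; your socket and selector may differ:)

# /etc/postfix/main.cf -- hand mail to the OpenDKIM milter for signing
smtpd_milters = inet:127.0.0.1:8891
non_smtpd_milters = $smtpd_milters
milter_default_action = accept

; BIND zone snippet -- publish the public half of the key under the selector
default._domainkey  IN  TXT  "v=DKIM1; k=rsa; p=<public key goes here>"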

The other task tonight was another full system upgrade of my virtual machines via a `pacman -Syu`. It worked flawlessly on the standby server, so I rolled it out on the active one. Unfortunately, I totally forgot about the last time I performed this and quickly found my web, database, and mail services offline. The problem was that PostgreSQL 9.2.4-2's on-disk data format is not compatible with 9.3.2-4's. To upgrade, you need a copy of the older version's binaries, which are, of course, no longer present after a system upgrade. Luckily, Arch keeps cached packages around from anything previously installed. A quick removal of the new package (pacman -R postgresql), followed by an install of the previously cached package (pacman -U /var/cache/pacman/pkg/postgresql-9.2.4-2-x86_64.pkg.tar.xz) with a service enable/start, and we're back in business! Maybe it'd be worth the 20 minutes to do the pgsql stuff properly. Maybe, but that's going to have to wait for another night.
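
(Note to future me, since "do it properly" comes up every time: the Arch way is the postgresql-old-upgrade package plus pg_upgrade--roughly the dance below, paths per the wiki and not re-checked against my own boxes:)

[ariens@vps1 ~ ]# pacman -S postgresql-old-upgrade
[ariens@vps1 ~ ]# systemctl stop postgresql
[ariens@vps1 ~ ]# mv /var/lib/postgres/data /var/lib/postgres/olddata
[ariens@vps1 ~ ]# su - postgres -c "initdb -D /var/lib/postgres/data"
[ariens@vps1 ~ ]# su - postgres -c "pg_upgrade -b /opt/pgsql-9.2/bin -B /usr/bin -d /var/lib/postgres/olddata -D /var/lib/postgres/data"
[ariens@vps1 ~ ]# systemctl start postgresql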


Fresh (posted by dave)

Today I performed a full system upgrade of my Arch Linux VPS hosts. The latest update introduced significant changes and bundled all system binaries into /usr/bin--a move that I personally was really glad to see. I never understood the need for Linux systems (or any operating system, for that matter) to maintain multiple binary path directories. While the upgrade was smooth enough, I did have to update several AUR packages, although they were easy enough upgrades to perform. Once everything was completed I rebooted both VPS hosts and, although it wasn't really required, it was nice to see everything come back in under 20 seconds with the master/slave services all in their proper state.

Another tweak I made was to the Dovecot user-space configuration. I added a little snippet of config that auto-creates/subscribes the required "Spam" IMAP directory for all users upon login. I played around with some other sieve scripts as well, and have some e-mail accounts auto-forwarding to my primary account.
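
(The snippet in question is tiny--something like this in Dovecot's mailbox config:)

namespace inbox {
  mailbox Spam {
    auto = subscribe
    special_use = \Junk
  }
}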

The next updates will for sure be web platform related... I spent 10 hours in a room today watching a DB engineer tune a Cassandra database and it got me thinking... I'll be trying to migrate away from PostgreSQL as soon as I can find some time to get my feet wet.


Customers Migrated, Layer Zero fully Stand-alone (posted by dave)

I had to read my own home page to determine where the heck I had left off. Apparently, it's been quite some time since I've posted an update. The past few weeks have been a crushing combination of putting in 55+ hour weeks at work while struggling to maintain the status quo in the homestead. While it's all high-tech high-life shenanigans, it's just so much more enjoyable when it's a personal, blog-able contribution that reminds me of my roots. These past 3 weeks I have completed the required changes for the various mail components of Layer Zero's customers and the new mail solution is humming away swimmingly. At first I was a little blown away by the sheer volume of spam that was now bombarding my clients. Previously, those mails were sheltered by Google's fine mail infrastructure; now they were landing in INBOX directories without challenge. I set up SpamAssassin in a jiffy, but the end result varied little--properly identified spam messages were now flagged, but still enjoying INBOX residence. To counter the nastiness, I had to investigate countermeasures. Turns out there's a fantastic little plugin for Dovecot's local delivery agent that implements the Sieve language for controlling such messages. I set up a little global rule that looks for the X-Spam-Flag header and dumps flagged messages into a special "Spam" folder. This was what was needed to keep the last customer-facing interface clean and as-expected.
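
(The global rule itself is about as simple as Sieve gets--mine is more or less this:)

require ["fileinto"];

# SpamAssassin stamps positively-scored mail with X-Spam-Flag: YES
if header :is "X-Spam-Flag" "YES" {
    fileinto "Spam";
    stop;
}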

So, the next updates--there's nothing left--are going to be the web platform. New modules, new functionality, image galleries, WOO! (time permitting)


Content Replication Established (posted by dave)

It took quite some time, but I finally have a very stable and reliable mechanism for replicating content within my server cluster. I used lsyncd to monitor file system events (create/delete/update) and trigger csync2 to replicate the changes across the cluster. Currently the cluster consists of my two primary VPS instances and a "master" instance on my home network. As described below, the two server nodes connect to the master over the IPsec VPN tunnel where the master gets updated with every change on either host.

Depending on what content was modified I can trigger various replication patterns accordingly. For example, I am currently running an active/standby Postfix/Dovecot mail server. When the primary gets a new e-mail (or when a message is deleted, marked as read, moved to another folder, etc.--turns out there's a storm of activity under the hood of Postfix) the updates get instantly streamed to my master node, and that in turn triggers the replication to the standby. The trick is all in how you configure which file system paths you instruct lsyncd to monitor and how you chain csync2 configurations together.
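
(For reference, a csync2 group definition is about this simple--the hostnames and paths below are placeholders, not my actual layout:)

# /etc/csync2.cfg
group cluster {
    host vps1 vps2 master;
    key /etc/csync2.key_cluster;
    include /srv/http;
    include /etc/nginx;
    auto younger;
}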

Other types of changes, for example an NginX configuration file change, can happen on any node (master, or either server) and all nodes get replicated in under a second.

The only part left is the database (currently PostgreSQL) replication, and now I'm thinking that perhaps I will explore using a non-relational NoSQL-based solution that scales more easily and is built to be more cloud friendly. I LOVE PostgreSQL, but if I had to choose between learning something new and sticking with tried and true, it might finally be time for me to say goodbye to the elephant in the room.


VPS Replication Network Established (posted by dave)

Now that the base OS for my web platform is feature complete I have created a snapshot of the VM and instantiated it on my original (East Coast) VPS. Now, both of the VMs are running identical web stacks and the only difference between them are the networking related configurations for the various services. This will make scaling out a breeze since all I need to do is instantiate a new VM using the base image and make some minor configuration updates before it becomes part of the cluster.

Of course, I will still have the issue of replicating the content across the cluster nodes and for that I am in the process of building out a replication network. See, each of my VMs has a local 192.168/16 loopback address which I use to create IPsec VPN tunnels to my home network. I decided to burn a dedicated port on my Juniper SRX at home and create a replication VLAN that will have direct access from my VPS servers. It's got a new dedicated 10.x.y/24 subnet and it's fully accessible from the VMs running in the cluster. Later, I will install and configure csync2 and lsyncd on each of the VMs and also the replication VPS master. I will then update configuration/content on the VPS master and those two utilities will ensure the changes get pushed out to each of the cluster nodes.

I still have the issue of PostgreSQL database replication to solve but we're at least a few days if not a week from tackling that problem. In the meantime, if anyone has any good success stories with PostgreSQL multi-master please drop me a line.


Damn Near Ready To Actually Start Building (posted by dave)

Registration is back online. I completed the configuration of my MTA, which means that nearly all the core platform functionality has been implemented. The only thing that remains is the database replication and configuration synchronization. I found the MTA configuration to be really trivial, especially compared to the old days of having to compile and hack everything together manually. My choice was the Postfix MTA, configured to store virtual domain/user mail in maildir format, accessible via Dovecot IMAP(S), with Roundcube for webmail and Postfix Admin for administration, and with all domain/customer information stored in my PostgreSQL database.
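
(The Postfix side of "everything lives in PostgreSQL" mostly amounts to pointing the virtual_* maps at pgsql lookup tables--roughly the below; the .cf filenames are illustrative and each one just holds the DB credentials plus a SELECT query:)

# /etc/postfix/main.cf
virtual_mailbox_domains = pgsql:/etc/postfix/pgsql_virtual_domains.cf
virtual_mailbox_maps = pgsql:/etc/postfix/pgsql_virtual_mailboxes.cf
virtual_alias_maps = pgsql:/etc/postfix/pgsql_virtual_aliases.cf
# delivery to Dovecot (assumes a matching 'dovecot' service defined in master.cf)
virtual_transport = dovecot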

First off, Roundcube is absolutely amazing. The interface is stunning and it's as feature-rich as most stand-alone desktop mail clients, let alone web-based clients. While I'll always be partial to Outlook (the consequence of regularly writing 5000+ words a day in work-related e-mail), things have really come a long way since the days of SquirrelMail and Horde--and worse yet, my own custom PHP IMAP-based webmail client that I could never really get working properly with multi-part attachments and the proper encoding...

I'm very eager to get the DB and configuration replication working--it'll be sweet editing DNS zone files and NginX configuration from a central location and having my maildir stores replicated so I can run multiple MTA instances.

Bottom line--always be building.


Greets From Los Angeles, California (posted by dave)

If you're reading this then the web services for all of ariens.ca have propagated to my VPS on the west coast. This new server is a custom install of Arch Linux instead of my previous operating system (Ubuntu 12.x). It took me a really long time to finally get the NginX web server running in a chroot jail and talking nicely with all the underlying PHP and PostgreSQL modules... The biggest challenge is simply finding time; however, tonight I logged a solid 2 hours and nearly everything is complete. The only outstanding critical functionality is the MTA to send mail out for registrations/confirmations/etc. I'm not sure how to do that properly in Arch, so instead of just hacking in something that works, I'll be taking down the registration feature temporarily.

After that's complete, then it's time to snapshot the VM, roll the image onto the original East coast VPS, and start to figure out database replication and high availability!


West Coast Server Online (posted by dave)

Over the past two weeks I've had very little time to devote to the new web platform, so there isn't any new functionality or content. I have, however, managed to beef up my service infrastructure significantly with the addition of a 2nd server located on the west coast. The virtual private server provider is the same as my east coast provider, Corgitech.com; however, the hosting is located in Los Angeles, California.

The two servers are now providing DNS service to all applications powered by the Layer Zero platform. When I set off to build out the new server I avoided settling for one of the default server images like Ubuntu or CentOS. Instead, I opted for something a little different--I went with Arch Linux. Arch allows me to build the Linux distribution that I want, how I want, and maintain it however I feel works best. Two excellent components of Arch are pacman and systemd. While I'm rather new to this Linux variant, I have found the distribution's online community and supporting documentation to be very much aligned with my style of system administration. The distribution provides proper tools to manage the system and doesn't try to impose any overbearing ideals or constructs. Software gets installed and managed the way the developers intended, and that's my favorite part.

It was a little tricky trying to install the operating system through a virtual console using a mounted ISO virtual CD drive as my install medium, but once I realized how to get the BIOS boot partition working properly and had created/formatted an appropriate partition/file system structure, the rest was a breeze.

Once I send my provider a note saying I'm happy with the installation he'll clone the image and I can use it to migrate over my original VM instance and run Arch across the board.

Lastly--the other very cool thing I configured on this second server was another IPsec VPN tunnel from my Juniper SRX security gateway. I have a 192.168/16 addressing scheme for each VPS server and manage them all from wherever I happen to be around the world as if they are local resources. Very cool!


Service Updates (posted by dave)

Over the last few days I've migrated all my sites/domains from 3rd-party hosting services to my new server and online platform powered by Layer Zero. There aren't many sites, mainly just a few that I helped set up for others as well as one or two that I run myself. Previously, I was leveraging a really amazing shared hosting provider, Charlottez Web. The company is run by a fellow named Jason and I was a loyal customer for several (5+ ?) years. Even though I had my domain MX records pointed at Google Apps (I got in while the service was free, and Google has grandfathered the accounts), and my WWW CNAME and A records were pointing to the now-extinct Posterous, I was always able to get really quick turn-around and support from Jason, and I happily recommended his service to others.

In addition to migrating everything over, I've made some minor tweaks to the message sub-system on this site, too. A new processing engine for errors, warnings, notices, and success messages has been built into the session handler. There's also now a 'Logout' link for those who sign in, and other little things like hiding the Login/Register pages for logged in users have all been implemented.

Just as I migrate away from my 3rd-party hosting company I read that there's a brand new Apache vulnerability affecting shared hosting/cPanel implementations. Couple that with the recent slew of compromised WordPress installations and it's easier to explain to others why I've decided to get back into the custom web platform arena.


Slowly and Surely (posted by dave)

Just a quick update 'cuz I'm exhausted--all the links are functioning! Login, registration, and the e-mail activation are all working. If you notice anything odd, please share your feedback with me. The forms are all plain, no XML HTTP Request/Ajax hotness (yet), and very little styling. Once I get some more cycles, I'll make it look a little prettier.


Parent Company Founded! (posted by dave)

Today it dawned on me that I'm going through the hassle of building this amazingly powerful web site, which I ultimately intend on opening up access to, and I don't have a parent site to advertise it from. I decided to register a new domain and spin up a parent site under the name Layer Zero. As of this writing that link isn't working, but I'm hoping that the magic ghosts of Internet propagation will be casting their voodoo shortly. After buying the domain I had to create two name server records with the registrar: one I created using this server's public IP, and for the other I used my home/residential public IP from my ISP. On both I had to install and configure BIND and add the new zone file. On the VPS I had to instruct it to listen on all interfaces and permit UDP/TCP 53 through iptables. On my home network it was a little trickier, as I had to pick a good server to deploy BIND to and then NAT the DNS traffic through my Juniper SRX security gateway. Both DNS servers point the web traffic for the new site to my VPS, where I created a new virtual server on NginX.
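
(The zone file itself is nothing fancy--a stripped-down sketch with made-up IPs and name server labels, not the real records:)

$TTL 3600
$ORIGIN layerzero.ca.
@       IN  SOA  ns1.layerzero.ca. hostmaster.layerzero.ca. (
                 2013070101 ; serial
                 3600       ; refresh
                 900        ; retry
                 604800     ; expire
                 3600 )     ; minimum
        IN  NS   ns1.layerzero.ca.
        IN  NS   ns2.layerzero.ca.
ns1     IN  A    203.0.113.10    ; the VPS
ns2     IN  A    198.51.100.20   ; home/residential IP, NATed through the SRX
@       IN  A    203.0.113.10
www     IN  A    203.0.113.10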

Of course, I'm assuming that when I wake up everything will work :)

[Edit: It does]


User Registration and Activation Nearly Ready (posted by dave)

The code's written and the register and login pages are going to be launched in the next day or so. The tricky bits were doing it right... using randomly generated hexadecimal IDs for the site and its users and ensuring they are strong and unique. Of course, I had to account for weak/crappy passwords, and usernames and e-mail addresses that were already in use or reserved. Turns out a lot has changed in security since I built my last site, so my passwords are all now hashed using a Blowfish-based algorithm with strong salts. And, of course, to ensure users were real, I had to implement an e-mail based 'activate your account' type system, which means standing up an SMTP server, creating MX records for my site's FQDN, and getting the VPS provider to get his data center/ISP to add some reverse records for the IP (mail servers really like to think everything is spam). Once the activation codes were generated I had to ensure they were strong as well, and that there was an expiration on them, too.

These are all the reasons why most people go 'off-the-shelf' and are generally ignorant to the underlying security. Even the easy stuff is complicated, but for me the adage "if it's worth doing, then it's worth doing right" really applies.


Login and Registration Coming Soon (posted by dave)

User registration and login from Facebook and Twitter nearly 1/3 complete.


Hello World (again) (posted by dave)

So, I couldn't do it. I took down the old/ugly layout and I'm sticking with this (for now). It's a lot cleaner and will give me more opportunity to scale the various widgets that'll eventually get built around the interface. For now, there's not going to be much content at all. I'll likely go back and massage my old layout's content into this one but I won't have time for a little while.

Laurie and I recently attended Dave and Rhodora's wedding and I'm sure there'll be some good pictures of the wedding and stories of the night out to post soon.

For now, this is it!