tech blog

June 21, 2019

Over the last several weeks, I ran into 5 or 6 very thorny problems. Let's see if I can count them. About all I'm good for at this moment is writing gripey blog posts, if that.

My June 12 entry refers you to the drag and drop problem and the hard refresh problem. Those are 2 of the problems.

I just wrote an article on network bridging and using MITM (man in the middle) "attacks" / monitoring. Getting both of those to work was a pain. The bridging took forever because the routing table kept getting messed up. The MITM took forever because it took a lot of searching to discover that the ebtables commands were necessary.

After I solved the Firefox problems mentioned on June 12, I ran into another one. The whole point of my "exercise" for calendar months (weeks of billable time) was to rewrite the lawyer ERP timecards such that they loaded many times faster. They were taking 8 seconds to load, and *I* did not write that code.

Load time was instant on my machine. Everything was good until I uploaded the timecard to the Amazon nano instance. Then the timecards took 30 - 45 seconds to load, with the CPU pegged the whole time. So, I'm thinking, my personal dev machine is relatively fast. The nano instance is, well, nano. So, I figured, "More cowbell!" On a micro instance, RAM goes from 0.5 GB to 1 GB. That appeared to be enough to keep swap usage near zero. No help. Small--nope: no noticeable change. At medium, CPUs go from 1 to 2. Still no change. I got up to the one that costs ~33 cents an hour--one of the 2xlarge models with 8 CPUs. Still no change. WTF!?!

I had started to consider the next generation of machines with NVMe (PCIe SSDs). My dev machine has NVMe, so maybe that's part of the problem. However, iotop didn't show any thrashing. It was purely a CPU problem.

So, upon further thought, it was time to go to the MySQL "general" query log. (At the MySQL prompt, SET GLOBAL general_log = 'ON' turns it on at runtime, and the general_log_file variable controls where it's written.) The timecard load was so slow that I figured I might see the query hang in real time. Boy, did I ever! I found one query that was solely responsible. It took 0.13s on my machine and 46s on the AWS nano (and on the much more powerful instances, too). That's a factor of 354.

The good news was that I wrote the query, so I should be able to fix it, and it wasn't embedded hopelessly in 50 layers of Drupal feces. (I did not choose Drupal. I sometimes wish I had either passed on the project or seized power very early in my involvement. My ranting on CMSs will come one day.)

I thought I had isolated which join was causing trouble by taking query elements in and out. I tried some indexes. Then I looked at the explain plan. It's been a long time since I've looked at an explain plan, but I didn't see anything wrong.

My immediate solution was to take out the sub-feature that needed the query. That's fine with my client for another week or two. Upon yet more thought, I should be able to solve this easily by using my tables rather than Drupal tables. I've written lots of my own tables to avoid Drupal feces. It turns out that using my tables is a slightly more accurate solution to the problem anyhow.

One of the major benefits of using AWS is that my dev machine and the live instance are very close to identical in terms of OS version, application versions, etc. So this is an interesting example of a nonlinear effect--change the performance characteristics of the hardware just a bit, and your query might go over a cliff.

I guess it's only 5 problems. It seemed like more.

June 12, 2019 - a week in the life

I created a new page on some of my recent frustrations--frustrations more than achievements. We'll call it "a week in the life." I thought browser differences were so 2000s or 200ns (2000 - 2009).

March 9, 2018 - upgrading MongoDB in Ubuntu 17.10

This started with the following error in mongodump:

Failed: error dumping metadata: error converting index (<nil>): conversion of BSON value '2' of type 'bson.Decimal128' not supported

Here is my long-winded solution.

March 8, 2018 - anti-Objectivist web applications

I was just sending a message on a not-to-be-named website, and I discovered that it was eliminating the prefix "object" as in "objective" and "objection." It turned those words into "ive" and "ion." Of course, it did it on the server side, silently, such that I only noticed it when I read my already-sent message. The good news is that the system let me change my message even though it's already sent. I changed the words to "tangible" and "concern."
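For illustration, here is my guess at what the server-side filter is doing--a blind substitution with no word-boundary check. The sed call is my reconstruction, not the site's actual code:

```shell
# Strip "object" everywhere, word boundaries be damned:
echo "objective and objection" | sed 's/object//g'
# -> ive and ion
```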

I have been teaching my apprentice about SQL injection and what I call the "Irish test": Does your database accept "O'Reilly" and other Irish names? This is also a very partial indication that you are preventing SQL injection. Coincidentally, I emailed a version of this entry to someone with such an Irish name. So far, sending him email hasn't crashed GMail. They probably use Mongo, though.
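A minimal version of the Irish test, using sqlite3 for convenience. The doubled quote is the SQL-standard escape; real application code should use bound parameters rather than hand-escaping:

```shell
# If your quoting/parameterization is right, O'Reilly goes in and comes back out.
sqlite3 :memory: "CREATE TABLE people(name TEXT);
                  INSERT INTO people VALUES('O''Reilly');
                  SELECT name FROM people;"
# -> O'Reilly
```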

If you haven't guessed, what's happening in this case is the elimination of "object" because it might be some sort of relative of SQL injection. I thought I'd seen evidence that the site is written in PHP, but, now that I look again, I'm not as sure. This is knowable, but I don't care that much. I don't think "object" is a keyword in either PHP or JavaScript. (Yes, I suppose I should know that, too, but what if I chased down every little piece of trivia?!) In any event, someone obviously got a bit overzealous, no matter what the language.

I will once again posit to my apprentice that I don't make this stuff up.

The final word on SQL injection is, of course, this XKCD comic. I must always warn that I am diametrically opposed to some things Munroe has said in his comic. I would hope he goes in the category of a public figure, and thus I can call him an idiot-savant. Then again, he more or less calls himself that about every 3rd comic. He's obviously a genius in many ways, but he epically misses some stuff. One day, this tech blog might go way beyond tech, but I'm just not quite there yet, so I'm not going to start exhaustively fussing at Randall.

Mar 1, 2018 - LetsEncrypt / certbot renewal

This is the command for renewing an SSL cert "early":

sudo certbot renew --renew-by-default

Without the --renew-by-default flag, I can't seem to quickly figure out what certbot considers "due for renewal." (The certbot certificates subcommand lists each cert along with its expiry date, which helps.) Without the flag, you'll get this:

The following certs are not due for renewal yet:
  /etc/letsencrypt/live/[domain name]/fullchain.pem (skipped)
No renewals were attempted.

LetsEncrypt documents its rate limits / usage quotas under "rate limits."

An update, moments after I posted this: the 3-week renewal reminder emails are for the "staging" / practice / sandbox certs, not the live / real ones. I wonder when, or if, I'd get the live email. Also, I won't create staging certs again, so those emails won't remind me of the live renewals. I'll put it on my calendar--I'm not relying on an email--but it's still somewhat odd.

The email goes to your address in your /etc/letsencrypt/.../regr.json file, NOT the Apache config. I say ... because the path varies so much. grep -iR [addr] will find it.

Feb 2, 2018 - base62

Random base64 characters for passwords and such annoy me because + and / will often break a "word"--it's hard to copy and paste the string, depending on the context. Thus, I present base62: the base64 characters minus + and /. I considered commentary, but perhaps I'll leave that as the infamous "exercise for the reader." However, I do have a smidgen of commentary below.
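The scriptlet itself is in PHP, but the idea fits in one shell pipeline--take base64 output and drop +, /, and the = padding. This is a sketch of the concept, not my actual base62.php:

```shell
# 64 random bytes -> base64 -> remove +, /, =, and newlines -> first 20 chars
head -c 64 /dev/urandom | base64 | tr -d '+/=\n' | cut -c1-20
```

Each remaining character is one of the 62 alphanumerics, which is the whole point: double-clicking the string selects all of it.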


Assuming you call the file base62.php, give it exe permission, and execute from the Linux command prompt:

./base62.php 50


./base62.php 1000 | grep -P '[ANZanz059]'

That's my validation that I see the start, end, and midpoints of my 3 sets (arrays) of characters.


In the event that Google doesn't look inside the textarea, UQID: VMbAlZQ13ojI. That was generated with my brand new scriptlet. So far that string is not indexed by Google. UQID as in unique ID. Or temporarily globally unique ID. Or currently Google unique ID (GOUID?). Presumably it isn't big enough to be unique forever. 62^12 ≈ 3 x 10^21. That's big but not astronomical. :)

somewhat-to-irrelevant commentary

What can I say? Sometimes I amuse myself. Ok. My structure is on the obtuse side. I couldn't help it. I usually don't write stuff like that. Perhaps Mr. 4.6 or one of my more recent contacts can write the clearer version. I actually did write clearer versions, but, then, I couldn't help myself.

further exercise to the reader

Perhaps someone will turn this into a web app? Complete with nice input tags and HTML5 increase and decrease integer arrows and an option to force SSL / TLS and AJAX.


sudo cp base62.php /usr/bin
cd /usr/bin
sudo ln -s ./base62.php base62
cd /tmp
base62 20
[output =] RyH3HjGnEalr71meSJfm

Now it's part of my system. I changed to /tmp to make sure that having "." in my PATH wasn't an issue--that it was really installed.


Jan 28, 2018 - Stratego / Probe

I'd like to recommend Imersatz' Stratego board game implementation called Probe. It is the 3-time AI Stratego champion. The AI plays against you. It's a free download; see the "Download" link on that page. From the point of view of a human who is good at the game, I would call it quasi-intelligent, but it beats me maybe 1 in 7 times, so it's entertaining.

I am running the game through WINE ("Wine Is Not an Emulator"), which runs Windows programs on Linux. I just downloaded it to make sure it matches what I downloaded to this new-to-me computer months ago. It does. Below I give various specs so you can make sure you have the same thing I do. It hasn't eaten my computer or done anything bad. I have no reason to think it's anything but what it says it is. In other words, I am recommending it as non-malware and fun. If it makes you feel any better, you can see this page in secure HTTP.

Probe2300.exe [the download file]
19007955 bytes
or 19,007,955 bytes / ca. 19MB
SHA512(Probe2300.exe)= e96f5ee67653eee1677eb392c49d2f295806860ff871f00fb3b0989894e30474119d462c25b3ac310458cec6f0c551304dd2aa2428d89f314b1b19a2a4fecf82
SHA256(Probe2300.exe)= ee632bcd2fcfc2c2d3a4f568d06499f5903d9cc03ef511f3755c6b5f8454c709

The above is the download file from Imersatz. In the probe exe directory, I get:

1860608 [bytes] Feb 28  2013 Probe.exe
 800611         Feb 28  2013 Probe.chm
1291264         Feb 28  2013 ProbeAI.dll

SHA256(ProbeAI.dll)= 13e862846c4f905d3d90bb07b17b63c915224f5a8c1284ce5534bffcf979537a
SHA256(Probe.chm)= 3b7be4e7933eee5d740e748a63ea0b0216e42c74a454337affc4128a4461ea6b
SHA256(Probe.exe)= 656f31d546406760cb466fcb3760957367e234e2e98e76c30482a2bbb72b0232
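To compare your own download against the hashes above (filenames are from the listing; run this in the directory you downloaded to):

```shell
# Each command prints "<hash>  <filename>"; compare against the values above.
sha256sum Probe2300.exe
sha512sum Probe2300.exe
```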

Jan 14, 2018 - grudgingly dealing with Mac (wifi installation)

The first time Mr. 4.6 installed Ubuntu Linux (17.10 - Artful Aardvark) on his Mac laptop (MacBook Pro?), wifi worked fine "out of the box." I think that's because he was installing Linux via wifi. This time, he used ethernet, and wifi wasn't recognized--no icon, no sign of a driver. Because he was using ethernet, maybe the installer didn't look for wifi? Maybe he didn't "install 3rd party tools"? (I asked him about that, but he was busy being excited that we fixed it. I'll try to remember to ask again.) There were good suggestions on how to fix it out there, but I derived the simplest one:

sudo apt-get install bcmwl-kernel-source

He didn't even have to reboot. His wifi icon just appeared.

For the record, that's "Broadcom 802.11 [wifi] Linux STA wireless driver source."

Thanks to Christopher Berner who got me very close. He was suggesting a series of Debian packages, but the above command installed everything in one swoop.

There are a few questions I have for 4.6 about this. Hopefully I'll get answers tomorrow or later.

Jan 3, 2018

JavaScript drag and drop

I created a JavaScript drag and drop example. I may have done it in jQuery a handful of times, but I don't remember for sure. This is a "raw" JS version--no jQuery or other libraries. I've been thinking about writing a to-do list organizer which would use drag and drop. Also, I might use it professionally soon.

new-to-HTML5 semantic elements / tags

Last night, my apprentice Mr. 4.6 showed me these new HTML5 elements / tags. I remember years ago looking for a list of everything that is new in HTML5. I suspect I've at least heard of 75% of it from searching on various stuff, but I did not know about some of those tags. I would hope there is a good list by now. Maybe I'll look again, or 4.6 will find one.

Dec 24, 2017 - remote MongoDB connections through Robo 3T / ssh port forwarding

A new trick to my Linux book:

ssh -L 27019: ubuntu@kwynn.com -i ./*.pem

That forwards local port 27019 to kwynn.com's port 27017 (MongoDB), but from kwynn.com's perspective, 27017 is a local port ( / localhost). Thus, I can connect through Robo 3T ("the hard way" / see below) to MongoDB on kwynn.com without opening up 27017 to the world. In Robo 3T, I just treat it like a local connection, except on port 27019. (There is nothing special about 27019. Make it what you want. Thanks to Gökhan Şimşek, who gave me this idea / solution / technique in this comment.)

I used this because I am suffering from a variant of the ssh tunneling bug in 3T 1.1. (I solved it. See below.) I think I have a different problem than most report, though. Most people seem to have a problem with encryption. I'm not having that problem because this is what tail -f /var/log/auth.log shows:

I suspect the Deprecated stuff is irrelevant:

Dec 24 00:11:11 kwynn.com sshd[18675]: rexec line 16: Deprecated option UsePrivilegeSeparation
Dec 24 00:11:11 kwynn.com sshd[18675]: rexec line 19: Deprecated option KeyRegenerationInterval
Dec 24 00:11:11 kwynn.com sshd[18675]: rexec line 20: Deprecated option ServerKeyBits
Dec 24 00:11:11 kwynn.com sshd[18675]: rexec line 31: Deprecated option RSAAuthentication
Dec 24 00:11:11 kwynn.com sshd[18675]: rexec line 38: Deprecated option RhostsRSAAuthentication
Dec 24 00:11:12 kwynn.com sshd[18675]: reprocess config line 31: Deprecated option RSAAuthentication
Dec 24 00:11:12 kwynn.com sshd[18675]: reprocess config line 38: Deprecated option RhostsRSAAuthentication
[end deprecated]

Dec 24 00:11:12 kwynn.com sshd[18675]: Accepted publickey for ubuntu from [my local IP address] port 50448 ssh2: RSA SHA256:[30-40 base64 characters]
Dec 24 00:11:12 kwynn.com sshd[18675]: pam_unix(sshd:session): session opened for user ubuntu by (uid=0)
Dec 24 00:11:12 kwynn.com systemd-logind[960]: New session 284 of user ubuntu.
Dec 24 00:11:12 kwynn.com sshd[18729]: error: connect_to kwynn.com port 27017: failed.
Dec 24 00:11:12 kwynn.com sshd[18729]: Received disconnect from [my local IP address] port 50448:11: Client disconnecting normally
Dec 24 00:11:12 kwynn.com sshd[18729]: Disconnected from user ubuntu [my local IP address] port 50448
Dec 24 00:11:12 kwynn.com sshd[18675]: pam_unix(sshd:session): session closed for user ubuntu
Dec 24 00:11:12 kwynn.com systemd-logind[960]: Removed session 284.

For the record, the error I get is "Cannot establish SSH tunnel (kwynn.com:22). / Error: Resource temporarily unavailable. Failed to create SSH channel. (Error #11)."

This doesn't seem to be an encryption problem, though, because my request is clearly accepted. MongoDB is bound to ( localhost) connections only--but this shouldn't be a problem because, based on traceroute, my system knows that IT is kwynn.com (it "knows" this via /etc/hosts). It doesn't try routing packets outside the machine.

On the other hand, this won't work in the sense that 3T won't connect:

ssh -L 27019:kwynn.com:27017 ubuntu@kwynn.com -i ./*.pem


Huh. I just fixed my problem. If I put kwynn.com in /etc/hosts as, then 3T won't work through "manual" ssh forwarding (like my command above), even if I forward as If I put kwynn.com in /etc/hosts as, 3T works 3 ways: either through the above ( OR this:

ssh -L 27019:kwynn.com:27017 ubuntu@kwynn.com -i ./*.pem

AND 3T works without my "manual," command-line ssh port forwarding, through its own ssh tunnel feature, which solves my original problem. However, I'm glad I learned about ssh port forwarding.

I need to figure out what the difference is between and 0.1. AWS puts the original "name" of the computer in /etc/hosts as by default, and I just read instructions to use Oh well, for another time...

December 21, 2017 - kwynn.com has its first SSL cert, Mongo continued

I'm starting to write around 11:08pm. I'll probably post this to test the link just below, then I should write more.


Kwynn.com has its first SSL certificate. You can now read this entry or anything else on my site through TLS / SSL. I have not forced SSL, though: there's no automatic redirect or rewrite.

I remember years ago (2007 - 2009??), a group was trying to create a free-as-in-speech-and-beer certificate authority (CA). Now it's done, I've used it, and it's pretty dang cool. Here are some quick tips:

my ssl.conf

Rather than letting certbot mess with your .conf, it should look something like the following. Once the 3 /etc/letsencrypt files have been populated by certbot ... certonly, you're safe to restart Apache.

I included ErrorLog and CustomLog commands to make sure SSL traffic went to the same place as non-SSL traffic.

<VirtualHost *:443>

	ServerName kwynn.com
	ServerAdmin myemail@example.com

	DocumentRoot /blah
	<Directory /blah>
		Require ssl
	</Directory>

	ErrorLog ${APACHE_LOG_DIR}/error.log
	CustomLog ${APACHE_LOG_DIR}/access.log combined

	SSLEngine on
	Include /etc/letsencrypt/options-ssl-apache.conf
	SSLCertificateFile /etc/letsencrypt/live/kwynn.com/fullchain.pem
	SSLCertificateKeyFile /etc/letsencrypt/live/kwynn.com/privkey.pem
</VirtualHost>

That does NOT force a user to use SSL. "Require" only applies to 443, not 80. If you want to selectively force SSL in PHP (before using cookies, for example), do something like this:

    if (empty($_SERVER['HTTPS']) || $_SERVER['HTTPS'] !== 'on') {
		header('Location: https://' . $_SERVER['HTTP_HOST'] . $_SERVER['REQUEST_URI']);
		exit;
	}

Using empty() for the first term avoids the "undefined index" warning in the Apache error log when $_SERVER['HTTPS'] isn't set at all, and the exit keeps the script from running on after sending the redirect header.

MongoDB continued -- partial SSL

I started to secure MongoDB with SSL / TLS, but then I noticed the Robo 3T option to use an SSH tunnel. Since one accesses AWS EC2 through an ssh tunnel anyhow, and I want access only for me, there is no need to open MongoDB to the internet. I'd already learned a few things, though, so I'll share them. Note that this is not fully secured because I had not used Let's Encrypt or any other CA yet, and I'm skipping other checks as you'll see. I was just trying to get the minimum to work before I realized I didn't need to continue down this path. See Configure mongod and mongos for TLS/SSL.

cd /etc/ssl/
openssl req -newkey rsa:8096 -new -x509 -days 365 -nodes -out mongodb-cert.crt -keyout mongodb-cert.key
cat mongodb-cert.key mongodb-cert.crt > mongodb.pem
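The req command above prompts interactively for country, organization, and so on. Here is a non-interactive variant, plus a quick check of what you generated. The -subj value "/CN=localhost" is a placeholder subject, and rsa:2048 is just for speed here:

```shell
openssl req -newkey rsa:2048 -new -x509 -days 365 -nodes \
  -subj "/CN=localhost" \
  -out /tmp/mongodb-cert.crt -keyout /tmp/mongodb-cert.key

# Confirm the subject and expiration dates of the new cert:
openssl x509 -in /tmp/mongodb-cert.crt -noout -subject -dates
```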

Then set up the config file as such:

cat /etc/mongodb.conf

storage:
  dbPath: /var/lib/mongodb
  journal:
    enabled: true

systemLog:
  logAppend: true

net:
  port: 27017
  ssl:
    mode: requireSSL
    PEMKeyFile: /etc/ssl/mongodb.pem

Then the NOT-fully-secure PHP part:


$ctx = stream_context_create(array(
	"ssl" => array(
	    "allow_self_signed" => true,
	    "verify_peer"       => false,
	    "verify_peer_name"  => false,
	    "verify_expiry"     => false
	)
));

$client = new MongoDB\Client("mongodb://localhost:27017", 
				array("ssl" => true), 
				array("context" => $ctx)
);

$dat = new stdClass();
$dat->version = '2017/12/21 11:01pm EST (GMT -5) America/New_York or Atlanta';
$tab = $client->mytest->stuff; // the "stuff" collection in the "mytest" db
$tab->insertOne($dat);

Dec 18 - MongoDB (with PHP, etc.)

I started using relational (SQL) databases in 1997. Finally in the last few years, though, I've seen a glimmer of the appeal of OO / schema-less / noSQL / whatever databases such as MongoDB. For the last few months I've been experimenting with Mongo for my personal projects. I'm mostly liking what I'm seeing. I haven't quite "bitten" or become sold, but that's probably coming. I see the appeal of simply inserting an object. On the other hand, I've done at least one query so far that would have been far easier in SQL. (Yes, I know there are SQL-to-Mongo converters, but the one I tried wasn't up to snuff. Perhaps I'll keep looking.)

I've been using Robo 3T (v1.1.1, formerly RoboMongo) as the equivalent of MySQL Workbench. I've liked it a lot. In vaguely related news, I found it interesting that some of the better Mongo-PHP examples I found were on Mongo's site and not PHP's. The PHP site seems rather confused about versions. I'm using the composer PHP-Mongo library. Specifically, the results of "$ composer show -a mongodb/mongodb" are somewhat perplexing, but they include "versions : dev-master, 1.3.x-dev, v1.2.x-dev, 1.2.0 ..." At the MongoDB command line, db.version() == 3.4.7. I don't think Mongo 3.6 comes with Ubuntu 17.10, so I'm not jumping up and down to install "the hard way," although I've installed MDB "the hard way" before.

Mostly I'm writing this because I've been keeping that PHP link in my bookmarks bar for weeks. If I publish it, then I don't need the link there in valuable real estate. Although in a related case I forgot for about 10 minutes that I put my Drupal database timeout fix on my web site. Hopefully I'll remember this next time.

Dec 17, 2017

Today's entry 2 - yet another Google Apps Script / Google Calendar API error and possible Google bug

I solved this before I started the blog and wrote about the other errors below. The error was "TypeError: Cannot find function createAllDayEvent in object Calendar." This was happening when I called "CalendarApp.getCalendarById(SCRIPT_OWNER);" twice within a few lines (milliseconds or less) of each other. The failure rate was something like 10 - 15% until I created the global. The solution is something like this:

var calendarObject_GLOBAL = false;

function createCalendarEntry(summary, dateObject) {
	var event = false;
	event = calendarObject_GLOBAL.createAllDayEvent(summary, dateObject);
	return event;
}

calendarObject_GLOBAL = CalendarApp.getCalendarById(SCRIPT_OWNER); // fetch the calendar object once

createCalendarEntry('meet Bob at Planet Smoothie', dateObject123);

I'm not promising that runs; it's to give you the idea. Heaven forbid I post proprietary code, and there is also the issue of taking the time to simplify the code enough to show my point. I should have apprentices for that (hint, hint).

I was getting errors when I called CalendarApp... both inside and outside the function. I suspect there is a race condition bug in Google's code. We know the hard way how fanatical they are about asynchronicity. Sometimes that's a problem.

Yes, yes. I'm being sarcastic, and I may be wrong in my speculation. I understand the benefit of all async. But isn't part of the purpose of a blog to complain?

Today's entry 1

I just updated my Drupal database connection error article.

Dec 6, 2017 - today's entry 2 - fun with cups and Drupal runaway error logs

I just discovered that /var/log/cups was using 40GB. Weeks ago I noticed cups was taking 100% of my CPU (or one core, at least) and doing a LOT of I/O. It was difficult to remove it entirely. The solution was something to the effect of removing not only the "cups" package but also cups-daemon. CUPS is the Linux printing system. I haven't owned a working printer in about 6 years, and I finally threw the non-working one away within the last year.
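For the next time a log directory balloons, this is how I'd spot the offender (sudo because some log files aren't world-readable):

```shell
# Biggest items under /var/log, largest last:
sudo du -sh /var/log/* 2>/dev/null | sort -h | tail -n 5
```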

I've had the same runaway log problem with Drupal writing 1000s of warnings (let alone errors) to "watchdog." It took me a long time to figure out that's why some of my Drupal processes were so slow. It seems that Drupal should simply stop logging errors after a certain number of iterations rather than thrash the disk for minutes. If I cared about Drupal, perhaps I would lobby for this, but I have come somewhere close to despising Drupal. That's another story for another time.

Dec 6, 2017 - fun with systemd private tmp directories

This happens when you just want to use /tmp from Apache, but no, you get something like /tmp/systemd-private-99a5...-systemd-resolved.service-Qz... owned by root and with no non-root permission. (Yes, yes, I have root access. That's not the point.) Worse yet, there are a bunch of such systemd directories, so which one are you looking for? Yes, yes, I'm sure there is a way to know that. Also not the point. The point is: please just make it stop!

Solution (for Ubuntu 17.10 Artful Aardvark)

  1. With root permission, open for editing: /etc/systemd/system/multi-user.target.wants/apache2.service
  2. Change the PrivateTmp line from true to false: PrivateTmp=false
  3. Run: sudo systemctl restart apache2.service
  4. I don't think you need to restart Apache itself (see note below), but I'm not sure. I did restart Apache; I didn't try it without.


I don't even know if restarting the apache2.service is the same thing as restarting Apache or not. On this point, it is worth noting that sometimes you have to stop going down the rabbit hole, or you may never accomplish what you set out to do. Yes, I should figure out what this systemd stuff is. Yes, I should know if the apache2.service is separate from Apache. One day. Not when I'm trying to get something very simple accomplished, though. Also, yes, I understand the purpose of a root-only private directory under /tmp. Yes, I understand that /tmp is open to all. But none of that is the point of this entry.

If you can't tell, I'm a bit irritated. Sometimes dev is irritating.

For purpose of giving evidence to my night owl cred, I'm about to post at 2:24am "my time" / EST / US Eastern Standard Time / New York time / GMT -5 / UTC -5.

2017, Nov 14 (entry 5)

I did launch with entry 4.

I just took an AWS EC2 / EBS snapshot of an 8GB SSD ("gp2") volume from my Kwynn.com "nano" instance at US-east-1a. With my site running, it took around 8 minutes. The "Progress" showed 0% for 6 - 7 minutes, then briefly showed 74%, then showed "available (100%)." It ran from 2:55:34AM - around 3:03am. My JS ping showed no disruption during this time. CPU showed 0%. I didn't try iotop. (Processing almost certainly takes place almost if not entirely outside of my VM, so 0% CPU makes sense.)

This time seems to vary over the years and perhaps over the course of a day, so I thought I'd provide a data point.

Entry 4 and launch attempt 2

I wrote entries 1 - 3 at the end of October, 2017, but I have not posted this yet. I'm writing this on Friday, November 10 at 7:34pm EST (Atlanta / New York / GMT -5). I mention the time to emphasize my odd hours. See my night owl developer ad.

I'm writing right now because of my night owl company (or less formal association) concept. My potential apprentice whom I codenamed "Mr. 4.6 Hours" has been active the last few days. I'd like to think I'm getting better at the balance between lecturing, showing examples, and leaving him alone and letting him have at it. I think he's making progress, but he's definitely making *me* think and keeping me active. Details are a longer story for another time. Maybe I'll post some of my sample code and, eventually, his code.

He's not around tonight, and I miss the activity. As I said in the ad, I'd like to get to the point that I always have a "green dot" on Google Chat / Hangouts or whatever system we wind up agreeing on.

Based on the last few days, I have a better idea of how to word my ad and the exchange I want with apprentices. Perhaps I'll write that out soon.

Entry 1: dev rule #1

Kwynn's Software Dev Rule #1: Never develop without a debugger. You will come to regret it. To clarify terms, by "debugger," I mean a GUI-based tool to set code breakpoints, watch variables, etc. Google Chrome Developer Tools "Sources" tab is a debugger for client-side JavaScript. Netbeans with Xdebug is a debugger for PHP. Netbeans will also work with Node.js and Python.

It is tempting to violate this rule because you think "Oh, I'll figure it out in another few minutes."

Another statement of this rule is "If you're 'debugging' with console.log or print or echo, you're in big trouble."

Entry 2: Google Apps Script and StackOverflow.com

I've been considering a blog for months if not years. I finally started because of this problem I'm about to write about.

This blog entry deals with both the specific problem and a more general problem.

The specific problem was, in Google Apps Script (GAS), "Server error occurred. Please try saving the project again". The exact context doesn't really matter because if you come across the problem, you know the context.

I spent about an hour chasing my tail around trying variations and otherwise debugging. At some point I tried to find info on Google itself. Google referred "us" to StackOverflow.com (SO) with the [google-apps-script] tag. Google declares that to be the official trouble forum. As it turned out, someone else was having the same problem. I joined SO in order to respond. Then roughly 4 others joined in. We were all having the same problem, and nothing we tried fixed it. I am 99% sure it was a Google server problem and there was nothing we could do. The problem continued during that night. Then I was inactive for ~14 hours. By then, everything worked.

The more general problem I wanted to address is the way SO's algorithms handled this. The original post and my response are still there several weeks later. However, others' perfectly valid responses were removed. To this day, SO still says, "Because [this question] has attracted low-quality or spam answers that had to be removed, posting an answer now requires 10 reputation on this site..."

This sort of algorithmic failure troubles me. I'd like the memory of those deleted posts on the record.

I was motivated to write about this because I encountered another GAS error a few hours ago that I once again suspect is a server error. This time, I was the one who started the thread. 2 hours later, no one has answered. I'm curious how this turns out. I'm not linking to the thread because it's still possible I caused the problem. Also, I'm not linking to it because Google almost immediately indexed it, so SO is the appropriate place to go.

Entry 3: dev rule #2

My first GAS problem, and perhaps the 2nd if it is indeed a server problem, brings up my rule #2:

Kwynn's software dev rule #2: always host applications on a site where you have root access and otherwise a virtual machine--something you have near-total control over. It should be hard to distinguish your control of the computer sitting next to you versus your host.

Amazon Web Services (AWS) meets my definition. AWS is perhaps one of the greatest "products" I've ever come across. It does its job splendidly. When they put the word "elastic" (meaning "flexible") in many of their products, they mean it.

Others come close. I used Linode a little bit; it's decent. I have reason to believe Rackspace comes close. I am pretty sure that neither of them, though, allows you to lease (32-bit) IP addresses like AWS does. I am reasonably sure getting a 2nd IP address with Linode or Rackspace is a chore--meaning ~$30 and / or human intervention and / or a delay. With Amazon, a 2nd IP address takes moments and is free as long as you attach it to an (EC2) instance.

This rule is less absolute than #1. Violating it always leads to frustration, though, and wasted time. Whether the wasted time is made up for by the alleged benefits of non-root hosts is a question, but I tend to think not. I've been frustrated to the point of ill health--one of the very few times I've *ever* been sick. That's a story for another time, though.

If it's not clear, using GAS violates the rule because of the situation where there is nothing you can do. I had some who-knows-the-cause problems with AWS in late 2010, but I've never had a problem since. If, heaven forbid, I did have a problem, I could rebuild my site in another Amazon "availability zone" pretty quickly. As opposed to just being out of luck with GAS.

Why I violate the rule with GAS is another story, perhaps for another time. I'll just say that if it were just me, I'd probably avoid GAS. With that said, some time I should more specifically praise some features of GAS as it applies to creating a Google Doc. I was impressed because given the business logic limitations I was working with, GAS was likely easier than other methods.
