
tech blog

2021, July 26 - VOIP - under revision, first around 12:50am, then 1:01am+, then 9:19pm

The result of the following is that I reserved a phone number and dialed it and got literally "hello world" from my Asterisk server.

Asterisk

I answered an ad about VOIP. The key requirement of the project was that the client needs to be able to leave more-or-less arbitrarily long voice messages. I haven't gotten to the point of just how long, but definitely well over 10 minutes. I would guess that an hour is needed. The problem they had is that they talked to 15 VOIP providers and no one went over 10 minutes.

I had a brush with a VOIP project in early 2016, and I've always wondered "What if?" I played some with the Asterisk software but couldn't make much of it. I compiled it and had it running in the barest sense, but didn't get it to do anything. Asterisk is of course free and open source.

In part because I had unfinished business from 2016, I started experimenting. Then I got obsessed and started chasing the rabbit. After about 21 hours of work spread over a week or so, I have most of the critical elements I need in two "pieces"--part in the cloud and part on my own server.

Here is an attempt at an edited version of my Asterisk install command history. One important note is that some of that history was probably tail chasing, as opposed to what actually mattered:
sudo ./install_prereq
sudo ./install_prereq install

Then I changed 4 config files.
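
To give the flavor, the dialplan piece of the "hello world" test looks something like this--a sketch, not my actual config; the context name and the exact number format Chime sends are assumptions:

; /etc/asterisk/extensions.conf (sketch)
[from-voip-provider]
exten => +15551234567,1,Answer()
 same => n,Wait(1)
 same => n,Playback(hello-world)   ; hello-world is a stock Asterisk sound file
 same => n,Hangup()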

Probably more to come, but I have an apprentice live right now reading this.

AWS

In almost all cases, the AWS documentation is excellent. In this case, I chased my tail around. In the end, I got somewhat lucky. Of all the weird things, I have the darndest time finding the right AWS console. The link is for the AWS Chime product including "voice connectors." So THERE is the console link.

I have the "hello world" voice which will probably download and not play. Someday perhaps I'll make it play. It's a lovely, sexy female voice--a brilliant choice on the part of the Asterisk folk. REVISION: I got some grief over "sexy." Perhaps she's only sexy when you've spent 21 hours getting to that point.

I just confirmed that the Chime console does not save in your "recently used" like everything else does. So I'm glad I recorded the link.

At the Chime console, you'll need the 32-bit IP (IPv4) address of your VOIP server, or its domain name. With only a bit of trying and study, I could not get 128-bit IP addresses (IPv6) to work--they were considered invalid.

  1. At the Chime console, go to "Phone number management," then "Orders," then "Provision phone numbers."
  2. Choose a "Voice Connector" phone number. (I am using SIP, but don't choose that option.)
  3. Choose local or toll free, then pick a city, state, or area code. Pick a number or numbers and "provision."
  4. After "provision" / ordering, it may take roughly 10 seconds to show up in the "Inventory" tab. You can use the table-specific refresh icon to keep checking (no need to refresh the whole page)
  5. Go to "Voice connectors" and "Create a new voice connector"
  6. The name is arbitrary but I believe there are type-of-character restrictions
  7. You'll want the same AWS region as the VOIP / SIP server.
  8. I have not tried encryption yet, so I disable it. (One step at a time.)
  9. "Create"
  10. click on the newly created connector
  11. Go to the "origination" tab
  12. Set the "Origination status" to Enabled
  13. Click "New" to create an "Inbound route"
  14. Enter the IP address or domain of the Asterisk "Host"
  15. the port is 5060 by default
  16. protocol is whatever you set the VOIP server to. I used TCP for a test only because it's more definitive to tell if it's listening
  17. set priority and weight to 1 for now. It's irrelevant until you have multiple routes.
  18. Add
  19. Save (This additional step trips me up.)
  20. Go to the "phone numbers" tab and "assign from inventory." Select your phone number and "assign..."
  21. Set /etc/asterisk/extensions.conf to the phone number you reserved (see my conf examples above)
  22. Restart Asterisk if you changed the number. There is a way to do it without restart.
  23. make sure Asterisk is running - I find it best to turn it off at the systemctl level and simply run "sudo asterisk -cvvvvvvv". Leave the Asterisk prompt sitting open so you can see what happens. (A quick check that it's listening is sketched just after this list.)
  24. open up port 5060 at the AWS "security group" level for that instance
  25. Dial the number and listen to "Hello world!"
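
A quick way to confirm the server is actually listening (assuming TCP, as in step 16; use -u instead of -t if you went with UDP):

sudo ss -tlnp | grep 5060        # is Asterisk listening on 5060/TCP?
sudo tcpdump -n port 5060        # watch the SIP traffic arrive when you dial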

2021, July 8 - zombie killing

I can now add zombie killing to my resume. I logged into this website roughly 30 minutes ago and was greeted with the "motd / message of the day" message that there were 75 zombie processes. I barely knew what a zombie is.

First I had to find out how to ID a zombie. The answer is ps -elf | grep Z. My new "simptime" / simple time server was causing the problem.

It didn't take long to more or less figure out what a zombie is, but it took just slightly longer to find what to do about it. When a process forks, the parent is supposed to be fully attentive, waiting to receive the exit / return value of the child, or it is supposed to make itself available (signal handler) to receive the value. If the parent is sleeping or waiting for something else, the parent never reads the return value, and the child's entry stays in the process table. The child is dead and not using any other resources, but one potential problem is that the process table fills up. Another problem is that the ps command (depending on switches) shows a bunch of "defunct" entries. (Similarly, there may be more entries in /proc/.)

A Geeks for Geeks zombie article explained how to stop the zombies; I chose the SIG_IGN option which tells the OS that the parent doesn't care what the exit value is, so the child's process entry is removed. I don't care because, for one, I have other ways of testing whether the system is working. For another, the parent can't "wait()" in my case because its job is to immediately start listening for more connections. Another option is a signal handler, but there is almost no benefit to the parent knowing the value in my case. Again, I have other ways of testing whether everything is working.
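
For what it's worth, the gist of the fix is one line before the listen / fork loop starts. This is a sketch, not the actual simptime code; it needs the pcntl extension:

<?php
pcntl_signal(SIGCHLD, SIG_IGN); // tell the kernel we'll never wait(); dead children are reaped automatically
while (true) {          // stand-in for the accept / listen loop
    $pid = pcntl_fork();
    if ($pid === 0) {
        // child: serve one request and exit; nobody needs to read this exit status
        exit(0);
    }
    // parent: go straight back to listening
    sleep(1);           // placeholder for blocking on the next connection
}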

2021, July 5 - yet another round with a blasted CMS

I have encoded below my software dev rule #4 about being careful of CMSs. I got burned again last night--Happy July 4 to me! I am building an Ubuntu 21.04 environment from scratch as opposed to upgrading. There are several reasons, but I suppose that is another story. Anyhow, I was trying to get Drupal 7 to run in the new environment. Upon a login attempt, I kept getting a 403 error and "Access denied" and "You are not authorized to access this page" even though I was definitely using the right password.

To back up, first I was getting "PHP Fatal error: Uncaught Error: Undefined class constant 'MYSQL_ATTR_USE_BUFFERED_QUERY' in /.../includes/database/mysql/database.inc". Thankfully I remembered that it's Drupal's crappy way of saying "Hey, you don't have php-mysql installed," so: sudo apt install php-mysql. Note that you have to restart Apache, too.

Similarly, Drupal's crappy way of saying "Hey, you don't have Apache rewrite installed" was a much more tangled path. I foolishly went digging in the code with the NetBeans debugger. This is a case of "When you're not in the relevant parts of Africa, and you see hoof prints, think horses, not zebras." I assumed a problem with Drupal rather than the obvious notion that something wasn't set up right.

I eventually got to code that made it clear that the login was not being processed at all. By looking at the conditions, I eventually realized that Drupal wasn't receiving the login or password. Then I realized that none of $_REQUEST, $_POST, or $_GET were showing the login and password. So I searched on that problem and quickly realized that it was a rewrite / redirect problem.
sudo a2enmod rewrite
sudo systemctl restart apache2

Problem solved! I won't admit after how long.

I was inspired to write some code for the "Never again!" category (a more legitimate use of the phrase than some, I might add).

2021, March 4 - 5 - Robo3T copy

The makers of Robo3T have started asking for name and email when you download. R3T is of course free and open source (software - FOSS), as is almost everything I use. I got the latest version directly from them, but I thought I'd provide it for others. Providing it for others is part of the point of FOSS.

Download - robo3t-1.4.3-linux-x86_64-48f7dfd.tar.gz

SHA256(robo3t-1.4.3-linux-x86_64-48f7dfd.tar.gz)= a47e2afceddbab8e59667facff5da249c77459b7e470b8cae0c05d5423172b4d
Robo 3T 1.4.3 - released approximately 2021/02/25	

I'm messing with this entry as of the 5th at 12:08am my time. I first posted it several minutes ago.

2021, Jan 31 - yet more on time measurement and sync

I'll go back a year and try to explain the most recent manifestations of my time-measuring obsession. I wasn't so much interested in keeping my computer's time super-accurate as I was interested in how to compare it with "official" time. Otherwise put, how do I query a time server? The usual way turned out to be somewhat difficult. (It just occurred to me a year later that perhaps NTP servers don't check the incoming / client time info. Or perhaps they do. In any event...) The usual way is first demonstrated in my SNTP (simple NTP) web project (GitHub, live).
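
The gist of "the usual way" (SNTP, RFC 4330) is below--a sketch, not the exact web or CLI code; pool.ntp.org is a placeholder server:

<?php
// send a 48-byte client request over UDP and read back the server's transmit timestamp
$sock = stream_socket_client('udp://pool.ntp.org:123', $errno, $errstr, 2);
stream_set_timeout($sock, 2);
$t0   = microtime(true);
fwrite($sock, chr(0x1B) . str_repeat(chr(0), 47)); // LI=0, VN=3, Mode=3 (client)
$resp = fread($sock, 48);
$t1   = microtime(true);
fclose($sock);
// transmit timestamp: 32-bit seconds at byte offset 40, counted from 1900-01-01;
// this ignores the fractional part, so it's only a rough sanity check
$server = unpack('N', substr($resp, 40, 4))[1] - 2208988800;
printf("round trip %.1f ms, server minus local ~%.1f s\n", ($t1 - $t0) * 1000, $server - ($t0 + $t1) / 2);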

During those explorations, I found the chrony implementation of the network time protocol (NTP). This both keeps "super" accurate time, depending on conditions, and it tells you how your machine compares to "official" time. That kept me happy for a while, but then I started wondering about the numbers chrony gives me.

So I updated the web SNTP code and made a command line (CLI / command line interface) version. (Note that in that case I'm linking to a specific version because that code will likely move soon.) In good conditions, that matches chrony's time estimate well enough. Good conditions are AT&T U-Verse DSL at a mere 14 Mbps download speed, accessed through wifi with 60 - 80% signal strength. Both U-Verse and my wifi signal are very, very stable. (I think it's still called DSL, even after ~22+ years. It involves something that looks like a plain old telephone line, although I can't be sure it's the same local wiring as 40 years ago.)

I can use the "chronyc tracking" command to get my time estimate, or I can use the tabular form of it that I wrote.

Below are my chrony readings as of moments ago (5:40pm my time). I'm removing some less-relevant rows.

/chronyc$ php ch.php
 mago    uso    rdi      rf    sk   rde      f
145.3     +0  50.91    -0.18  13.1   65   -7.794 
 96.3   +719   1.40    40.71  13.1   36   -7.794 
 95.2    -56   0.97    -0.20  10.5   37   -1.487 
 89.3    -63   1.59    -0.05   1.9   36   -5.476 
  1.8    +10   1.06    -0.00   0.3   36   -7.450 

Weeks later... I'm going to let this post die right here, at least for now. I hadn't posted this as of March 3.

2021, Jan 29 - chrony continued

As a follow-up to my previous entry, I've now set minpoll / maxpoll to 1 / 2 with my cellular network. THAT gets results. My offset time approaches that of a wired connection, and it's the same with root dispersion and skew.
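
For reference, those numbers are just options on the server line in /etc/chrony/chrony.conf; a sketch (the values are powers of two seconds, so 1 / 2 means polling every 2 to 4 seconds):

server kwynn.com iburst minpoll 1 maxpoll 2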

2021, Jan 28 - chrony on wired versus wireless

chrony is a Network Time Protocol (NTP) client / server; in other words, it helps computers keep accurate time by communicating time "readings" over the internet.

In the last few weeks I have set chrony to use kwynn.com as its time source. Kwynn.com lives on Amazon Web Services (AWS). AWS has a time service, and my "us-east" AWS region is physically close to the NIST time servers in Maryland. Right now I have a root dispersion and root delay of around 0.3 ms, and my root mean square offset from perfect time is 13 microseconds (us or µs). I have 3 - 5 decimal places after that, but I won't bore you any more than I already am. The point being that it's probably just as good as or better than using the NIST servers.

I've tested kwynn.com versus using it plus other servers in the Ubuntu NTP pool, and kwynn.com is much, much better. This is one of several stats that I may quantify one day, but I want to get the key point out because I found it interesting and want to record it for myself as much as anything.

Among other features, chrony has the "chronyc tracking" command that gives you an estimate of your clock's accuracy and various statistics around that estimate. Then I check chronyc against a script I wrote that polls other servers and outputs the delay, including an arbitrary number of polls of kwynn.com. Sometimes I'll query kwynn.com 50 times, seeking the fastest turnaround times, which in theory should be the best. I call this my "burst" script.

On AT&T U-Verse (I think that's still "DSL") at what is probably the slowest available speed (14 Mbps / 1.4 MBps), chrony is very stable. What chrony says versus "the burst" is very close.

On my T-Mobile (MetroPCS) hotspot, things get more interesting. Sometimes when I cut over from AT&T to wireless, my time gets pretty bad and the chronyc readings are very unstable. This evening it was so bad that I changed my minpoll / maxpoll to 2 / 4. (Depending on my OCD and my mood, I tend to have it on 4 - 5 / 6 - 7.) Note that you should not use such low numbers, or even close, with the NTP pool, and you may or may not get away with it using NIST--please check the fine print.

When I set min / max to 2 / 4, that's when things got interesting. On one hand, the chronyc numbers stabilize to the point that they get close to wired numbers. On the other hand, agreement with "the burst" is not nearly as "convincing" / close as wired. That is, chrony claims accuracy in a range of 100 - 300 us, but it's hard to get a "burst" to show better than 3 - 4 ms. The burst almost never shows time as good as chrony claims, but that's another discussion.

Otherwise put, with a low poll rate on wireless, chronyc claims to be happy and shows good numbers, but agreement with the burst is not nearly as close.

This is mostly meant as food for thought, and perhaps I'll give lots of gory details later. I mainly wanted to record those 2 / 4 numbers, but I thought I'd give some context, too.

2021, Jan 23 - detecting sleep / hibernate / suspend / wakeup in Ubuntu 20.04

In Ubuntu 20.04 (Focal Fossa), executables (including scripts with the x bit set) placed in /lib/systemd/system-sleep/ will run upon sleep / hibernate / suspend and wakeup. This is probably true of other Debian systems. I mention this because for some distros it's /usr/lib/systemd/system-sleep/

One indicator I had is that the directory itself already existed and 2 files already existed in it: hdparm and unattended-upgrades. There are some comments out there that /lib/... is correct for some Debian systems, but I thought this was worth writing to confirm.

example script

/lib/systemd/system-sleep$ sudo cat kw1.sh
#!/bin/bash 
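# systemd passes two arguments: $1 is "pre" or "post", and $2 names the sleep type (e.g. "suspend"); $@ below captures both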
echo $@ >> /tmp/sleeplog
whoami  >> /tmp/sleeplog
date    >> /tmp/sleeplog
	

The bits:

/lib/systemd/system-sleep$ ls -l kw1.sh
-rwxrwx--- 1 root root 158 Jan 23 18:18 kw1.sh
	

output:

$ cat /tmp/sleeplog
pre suspend
root
Sat 23 Jan 2021 01:39:49 AM EST
post suspend
root
Sat 23 Jan 2021 06:08:02 PM EST
	

The very careful reader will note that the script above is less than 158 bytes. I added a version number and a '******' delimiter after the first version. I'm showing just the basics, in other words, and I'm showing the parts that I know work.

2020, Nov 20 - arbitrary files played as "music"

As part of my now-successful quest for randomness from the microphone, I came across non-randomness from a surprising place. I generated the following audio file with these steps:

dd if=~/Downloads/ubuntu-20.04.1-desktop-amd64.iso of=/tmp/rd/raw.wav bs=2M count=1
ffmpeg -f u8 -ar 8k -ac 1 -i /tmp/rd/raw.wav -b:a 8k /tmp/rd/ubulong.wav
ffmpeg -t 1:35 -i /tmp/rd/ubulong.wav /tmp/rd/ubu95s.wav
chmod 400 /tmp/rd/ubu95s.wav
mv /tmp/rd/ubu95s.wav /tmp/rd/ubuntu-20-04-1-desk-x64-95-seconds.wav

Turn your speakers down! To about 1/4 or 1/3. I now present Ubuntu Symphony #1 - opus 20.04.1.1. There is a bit of noise for less than 2 seconds, then about 3 seconds of silence, and then the nearly continuous sound.

I posted several versions quickly. This time I'm stopping at 6:27pm on posting day.

2020, Oct 15 - SEO

In the last few weeks I finally took a number of SEO steps for this site. I'd been neglecting that for years. I registered the httpS version of kwynn.com with Google, and I created a new sitemap with a handful of httpS links.

A few weeks after the above, I got some surprising Google Search Console results. I have 247 impressions over 3 months for my PACER page. I only have 6 clicks, and I suspect that's because the page's Google Search thumbnail / summary / whatever shows an update date of November, 2017, which is incorrect. Soon I am going to attempt to improve that click through rate.

limitations of RAM, speed, etc. 2020, Oct 7 - entry 2 of the day

My only active apprentice just bought an ArduinoBoy in part because he is fascinated to wrestle with 1980-era limitations of RAM and such. As I discussed with him, I am not dissuading him from that. However, I wanted to give him something to think about.

Last night I managed to crash several processes and briefly locked up my session because I didn't consider that there are still limitations on relatively modern hardware. It's much harder to do that much (temporary) damage today than it was in 1995 or 2003, but it's still possible.

Generally speaking, I was testing something that involved all cores at once and as many iterations as I could get. I got away with 12 cores times 2M iterations (24M data points total). Then I ran that again without wiping my ramdisk (ramfs), so I was able to test 48M data points. Then when I tried to run 12 X 8M = 96M, my system went wonky.

I have not done a post-mortem or simple calculations to know what specifically went wrong. I probably exceeded the RAM limitation set in php.ini. I may have exceeded system RAM, but I don't think so. What is odd is that my browser crashed, and it was just sitting there innocently. It was not involved in the wayward code. All the CPUs / cores were pegged for a number of seconds, but that shouldn't have that effect.

Maybe he'll want to figure out what went wrong and how to most efficiently accomplish my testing?

On a related point, one thing I learned is that file_put_contents() outputting one line at a time simultaneously from 12 cores does not work well, which makes perfect sense with a few moments of thought. So I saved the data in a variable until the "CPU stuff" was done and then wrote one file per process. (fopen and fwrite were not notably faster in that case.)
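
Roughly, the fix looks like this--a sketch, not the actual test code; someMeasurement() and the output path are placeholders:

<?php
// each forked worker buffers its rows during the CPU-bound loop and writes
// one file per process at the end, instead of 12 processes appending to the
// same file a line at a time
function someMeasurement(int $i): string { return (string)$i; } // placeholder

$rows = [];
$n    = 2000000;   // iterations per worker
for ($i = 0; $i < $n; $i++) {
    $rows[] = someMeasurement($i);
}
file_put_contents('/mnt/ram/out.' . getmypid() . '.csv', implode("\n", $rows) . "\n");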

So how do I accomplish the testing I want with as many data points as possible, as fast as possible, without crashing my session (or close enough to crashing it)? The question of limitations applies on a modern scale.

Apparently the current version of the code is still set for 96M rows. The October 3 entry of my GitHub guide explains what I was doing to a degree. I'll hopefully update that page again sometime this week, and try to explain it better.

I also observed several weeks ago that forking processes in an infinite loop will very thoroughly crash the (boot) session, to the point of having to hold down the start button. Up until very roughly 2003, when I was still using Satan's Operating System, any infinite loop would crash the session. Now a client-side JS infinite loop will simply be shut down by the browser, and it is similarly contained in other situations. But infinitely forking processes on modern Ubuntu will get you into trouble. I suppose that's an argument for both a VM and imposing quotas. I took the quota route.
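
The quota route amounts to capping processes per user; a sketch of the /etc/security/limits.conf lines (the username and numbers are placeholders, not necessarily what I set):

# cap processes per user so a runaway fork loop hits a wall before the system does
kwynn   soft  nproc  2000
kwynn   hard  nproc  4000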

As best I remember, the code in question was around this point (AWS EC2 / CPU metrics process control).

new rules of software dev - numbers 3 and 4 - 2020, Oct 7 entry 1 of the day

The first two rules are at the beginning of this blog.

Kwynn's rule of software dev #3:

Never let anyone--neither the client nor other devs--tell you how to do something. The client almost by definition tells you what he wants done, not how.

This applies mainly for freelancing, or perhaps one should freelance in order to not violate the rule.

I should have formulated this in 2016 or 2017. I finally had one last incident in the summer of 2020 that caused me to formalize it, and now I'm writing it out several weeks later.

To elaborate on the rule, if you know all the steps necessary to do something in a certain way, do it. After it's done your way, no one is likely to argue with you. If you try to do it someone else's way, you are likely to waste a lot of time and money.

An example is beware of when the client requests that you do the quick fix. If your way is certain and the quick fix is uncertain, by the time you do the quick fix, you would have both fixed the problem and had a better code base by doing it your way.

Another statement of the rule is to beware of assuming that others know more than you do. Specifically beware of those who you may think are developers but are actually developer managers or salespeople with delusions of developing. I once knew a developer manager who exemplified the notion "He knows just enough to be dangerous." He led me into danger.

Kwynn's rule of software dev #4:

Custom-written software is often the best long-term solution. Be very careful of content management systems, ERP systems, e-commerce systems, etc.

To quote a comedian from many decades ago, "I went to the general store, but I couldn't buy anything specific." That reminds me of WordPress, Drupal, OpenERP (I doubt Odoo is any better.), etc. There is plenty more to say on this, but it will have to wait.

July 18, 2020

Some words on JavaScript var, let, const. I'll admit to still being fuzzy on some fine points, but here are some rules of thumb I've come up with that are well battle tested:

June 21, 2019

Over the last several weeks, I ran into 5 - 6 very thorny problems. Let's see if I can count them. About all I'm good for at this moment is writing gripy blog posts, if that.

My June 12 entry refers you to the drag and drop problem and the hard refresh problem. Those are 2 of the problems.

I just wrote an article on network bridging and using MITM (man in the middle) "attacks" / monitoring. Getting both of those to work was a pain. The bridging took forever because the routing table kept getting messed up. The MITM took forever because it took me a lot of searching to find the necessity for the ebtables commands.

After I solved the Firefox problems mentioned on June 12, I ran into another one. The whole point of my "exercise" for calendar months (weeks of billable time) was to rewrite the lawyer ERP timecards such that they loaded many times faster. They were taking 8 seconds to load, and *I* did not write that code.

Load time was instant on my machine. Everything was good until I uploaded the timecard to the Amazon nano instance. Then the timecards took 30 - 45 seconds to load. The CPU was pegged that whole time. So, I'm thinking, my personal dev machine is relatively fast. The nano instance is, well, nano. So, I figured, "More cowbell!" At a micro instance, RAM goes from 0.5 GB to 1 GB. That appeared to be enough to keep the swap space usage near zero. No help. Small--nope: no noticeable change. At medium, CPUs go from 1 to 2. Still no change. I got up to the one that costs ~33 cents an hour--one of the 2xlarge models with 8 CPUs. Still no change. WTF!?!

I had started to consider the next generation of machines with NVMe (PCI SSDs). My dev machine has NVMe, so maybe that's part of the problem. However, iotop didn't show any thrashing. It was purely a CPU problem.

So, upon further thought, it was time to go to the MySQL ("general") query log. The timecard load was so slow that I figured I might see the query hang in real time. Boy, did I ever! I found one query that was solely responsible. It took 0.13s on my machine and 46s on an AWS nano (and on much more powerful instances). That's 354x.
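
For anyone following along, turning the general log on temporarily looks something like this (not my exact steps from that night; the log path is an assumption):

SET GLOBAL general_log_file = '/var/log/mysql/general.log';
SET GLOBAL general_log = 'ON';
-- reproduce the slow page load, tail -f the log file, then:
SET GLOBAL general_log = 'OFF';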

The good news was that I wrote the query, so I should be able to fix it, and it wasn't embedded hopelessly in 50 layers of Drupal feces. (I did not choose Drupal. I sometimes wish I had either passed on the project or seized power very early in my involvement. My ranting on CMSs will come one day.)

I thought I isolated which join was causing trouble by taking query elements in and out. I tried some indexes. Then I looked at the explain plan. It's been a long time since I've looked at an explain plan, but I didn't see anything wrong.

My immediate solution was to take out the sub-feature that needed the query. That's fine with my client for another week or two. Upon yet more thought, I should be able to solve this easily by using my tables rather than Drupal tables. I've written lots of my own tables to avoid Drupal feces. It turns out that using my tables is a slightly more accurate solution to the problem anyhow.

One of the major benefits of using AWS is that my dev machine and the live instance are very close to identical in terms of OS version, application versions, etc. So this is an interesting example of an exponential effect--change the performance characteristics of the hardware just a bit, and your query might go over the cliff.

I guess it's only 5 problems. It seemed like more.

June 12, 2019 - a week in the life

I created a new page on some of my recent frustrations--frustrations more than achievements. We'll call it "a week in the life." I thought browser differences were so 2000s or 200ns (2000 - 2009).

March 9, 2018 - upgrading MongoDB in Ubuntu 17.10

This started with the following error in mongodump:

Failed: error dumping metadata: error converting index (<nil>): conversion of BSON value '2' of type 'bson.Decimal128' not supported

Here is my long-winded solution.

March 8, 2018 - anti-Objectivist web applications

I was just sending a message on a not-to-be-named website, and I discovered that it was eliminating the prefix "object" as in "objective" and "objection." It turned those words into "ive" and "ion." Of course, it did it on the server side, silently, such that I only noticed it when I read my already-sent message. The good news is that the system let me change my message even though it's already sent. I changed the words to "tangible" and "concern."

I have been teaching my apprentice about SQL injection and what I call the "Irish test": Does your database accept "O'Reilly" and other Irish names? This is also a very partial indication that you are preventing SQL injection. Coincidentally, I emailed a version of this entry to someone with such an Irish name. So far, sending him email hasn't crashed GMail. They probably use Mongo, though.

If you haven't guessed, what's happening in this case is eliminating "object" because it might be some sort of relative of SQL injection. I thought I'd seen evidence that the site is written in PHP, but, now that I look again, I'm not as sure. This is knowable, but I don't care that much. I don't think "object" is a keyword in either PHP or JavaScript. (Yes, I suppose I should know that, too, but what if I chased down every little piece of trivia?!) In any event, someone obviously got a bit overzealous, no matter what the language.

I will once again posit to my apprentice that I don't make this stuff up.

The final word on SQL injection is, of course, this XKCD comic. I must always warn that I am diametrically opposed to some things Munroe has said in his comic. I would hope he goes in the category of a public figure, and thus I can call him an idiot-savant. Then again, he more or less calls himself that about every 3rd comic. He's obviously a genius in many ways, but he epically misses some stuff. One day, this tech blog might go way beyond tech, but I'm just not quite there yet, so I'm not going to start exhaustively fussing at Randall.

Mar 1, 2018 - LetsEncrypt / certbot renewal

This is the command for renewing an SSL cert "early":

sudo certbot renew --renew-by-default

Without the --renew-by-default flag, I can't seem to quickly figure out what it considers "due for renewal." Without the flag, you'll get this:

The following certs are not due for renewal yet:
  /etc/letsencrypt/live/[domain name]/fullchain.pem (skipped)
No renewals were attempted.
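
Related (an assumption that this subcommand was in the version I used): certbot can list each cert along with its expiry date, which helps answer the "due for renewal" question:

sudo certbot certificates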

I should have the rate limits / usage quotas under "rate limits."

An update, moments after I posted this: the 3 week renewal emails are for the "staging" / practice / sandbox certs, not the live / real ones. I wonder when or if I'd get the live email? Also, I won't create staging certs again, so those won't help remind me of the live renewals again. I'll put it on my calendar--I'm not relying on an email--but still somewhat odd.

The email goes to your address in your /etc/letsencrypt/.../regr.json file, NOT the Apache config. I say ... because the path varies so much. grep -iR [addr] will find it.

Feb 2, 2018 - base62

Random base64 characters for passwords and such annoy me because + and / will often break a "word"--it's hard to copy and paste the string, depending on the context. Thus, I present base62: the base64 characters minus + and /. I considered commentary, but perhaps I'll leave that as the infamous "exercise to the reader." However, I do have a smidgen of commentary below.

Example

Assuming you call the file base62.php, give it exe permission, and execute from the Linux command prompt:

./base62.php 50
vjQBjFxJGcotOpxVJyvG1CUQ11010xigP1RyuKza120JWeFkeI

Validation

./base62.php 1000 | grep -P [ANZanz059]

That's my validation that I see the start, end, and midpoints of my 3 sets (arrays) of characters.

UQID

In the event that Google doesn't look inside the textarea, UQID: VMbAlZQ13ojI. That was generated with my brand new scriptlet. So far that string is not indexed by Google. UQID as in unique ID. Or temporarily globally unique ID. Or currently Google unique ID (GOUID?). Presumably it isn't big enough to be unique forever. 62^12 = 3 X 10^21. That's big but not astronomical. :)

somewhat-to-irrelevant commentary

What can I say? Sometimes I amuse myself. Ok. My structure is on the obtuse side. I couldn't help it. I usually don't write stuff like that. Perhaps Mr. 4.6 or one of my more recent contacts can write the clearer version. I actually did write clearer versions, but, then, I couldn't help myself.
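
For what it's worth, the gist of a clearer version might look something like this (a sketch, not the base62.php linked above):

<?php
// base62 = base64's character set minus '+' and '/'
$chars = array_merge(range('A', 'Z'), range('a', 'z'), range('0', '9'));
$n     = isset($argv[1]) ? (int)$argv[1] : 20;      // length from the command line, default 20
$out   = '';
for ($i = 0; $i < $n; $i++) {
    $out .= $chars[random_int(0, count($chars) - 1)];  // CSPRNG-backed
}
echo $out . "\n";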

further exercise to the reader

Perhaps someone will turn this into a web app? Complete with nice input tags and HTML5 increase and decrease integer arrows and an option to force SSL / TLS and AJAX.

installing

sudo cp base62.php /usr/bin
cd /usr/bin
ln -s ./base62.php base62
cd /tmp
base62
[output =] RyH3HjGnEalr71meSJfm

Now it's part of my system. I changed to /tmp to make sure that having '.' in my PATH wasn't an issue--that it was really installed.

Reference

Jan 28, 2018 - Stratego / Probe

I'd like to recommend Imersatz' Stratego board game implementation called Probe. It is the 3-time AI Stratego champion. The AI plays against you. It's a free download; see the "Download" link on that page. From the point of view of a human who is good at the game, I would call it quasi-intelligent, but it beats me maybe 1 in 7 times, so it's entertaining.

I am running the game through WINE, the Windows compatibility layer for Linux. I just downloaded it to make sure it matches what I downloaded to this new-to-me computer months ago. It does. Below I give various specs. Those are to make sure you have the same thing I do. It hasn't eaten my computer or done anything bad. I have no reason to think it's anything but what it says it is. In other words, I am recommending it as non-malware and fun. If it makes you feel any better, you can see this page in secure HTTP.

Probe2300.exe [the download file]
19007955 bytes
or 19,007,955 bytes / ca. 19MB
SHA512(Probe2300.exe)= e96f5ee67653eee1677eb392c49d2f295806860ff871f00fb3b0989894e30474119d462c25b3ac310458cec6f0c551304dd2aa2428d89f314b1b19a2a4fecf82
SHA256(Probe2300.exe)= ee632bcd2fcfc2c2d3a4f568d06499f5903d9cc03ef511f3755c6b5f8454c709

The above is the download file from Imersatz. In the probe exe directory, I get:

1860608 [bytes] Feb 28  2013 Probe.exe
 800611         Feb 28  2013 Probe.chm
1291264         Feb 28  2013 ProbeAI.dll

SHA256(ProbeAI.dll)= 13e862846c4f905d3d90bb07b17b63c915224f5a8c1284ce5534bffcf979537a
SHA256(Probe.chm)= 3b7be4e7933eee5d740e748a63ea0b0216e42c74a454337affc4128a4461ea6b
SHA256(Probe.exe)= 656f31d546406760cb466fcb3760957367e234e2e98e76c30482a2bbb72b0232

Jan 14, 2018 - grudgingly dealing with Mac (wifi installation)

The first time Mr. 4.6 installed Ubuntu Linux (17.10 - Artful Aardvark) on his Mac laptop (MacBook Pro?), wifi worked fine "out of the box." I think that's because he was installing Linux via wifi. This time, he used ethernet, and wifi wasn't recognized--no icon, no sign of a driver. Because he was using ethernet, maybe the installer didn't look for wifi? Maybe he didn't "install 3rd party tools"? (I asked him about that, but he was busy being excited that we fixed it. I'll try to remember to ask again.) There were good suggestions on how to fix it out there, but I derived the simplest one:

sudo apt-get install bcmwl-kernel-source

He didn't even have to reboot. His wifi icon just appeared.

For the record, that's "Broadcom 802.11 [wifi] Linux STA wireless driver source."

Thanks to Christopher Berner who got me very close. He was suggesting a series of Debian packages, but the above command installed everything in one swoop.

There are a few questions I have for 4.6 about this. Hopefully I'll get answers tomorrow or later.

Jan 3, 2018

JavaScript drag and drop

I created a JavaScript drag and drop example. I may have done it in JQuery a handful of times, but I don't remember for sure. This is a "raw" JS version--no JQuery or other libraries. I've been thinking about writing a to do list organizer which would use drag and drop. Also, I might use it professionally soon.

new-to-HTML5 semantic elements / tags

Last night, my apprentice Mr. 4.6 showed me these new HTML5 elements / tags. I remember years ago looking for a list of everything that is new in HTML5. I suspect I've at least heard of 75% of it from searching on various stuff, but I did not know about some of those tags. I would hope there is a good list by now. Maybe I'll look again or 4.6 will find one.

Dec 24, 2017 - remote MongoDB connections through Robo 3T / ssh port forwarding

A new trick to my Linux book:

ssh -L 27019:127.0.0.1:27017 ubuntu@kwynn.com -i ./*.pem

That forwards local port 27019 to kwynn.com's 27017 (MongoDB), but from kwynn.com's perspective 27017 is a local port (127.0.0.1 / localhost). Thus, I can connect through Robo 3T ("the hard way" / see below) to MongoDB on kwynn.com without opening up 27017 to the world. In Robo 3T I just treat it like a local connection, except on port 27019. (There is nothing special about 27019. Make it what you want. Thanks to Gökhan Şimşek, who gave me this idea / solution / technique in this comment.)

I used this because I am suffering from a variant of the ssh tunneling bug in 3T 1.1. (I solved it. See below.) I think I have a different problem than most report, though. Most people seem to have a problem with encryption. I'm not having that problem because this is what tail -f /var/log/auth.log shows:


I suspect the Deprecated stuff is irrelevant:

Dec 24 00:11:11 kwynn.com sshd[18675]: rexec line 16: Deprecated option UsePrivilegeSeparation
Dec 24 00:11:11 kwynn.com sshd[18675]: rexec line 19: Deprecated option KeyRegenerationInterval
Dec 24 00:11:11 kwynn.com sshd[18675]: rexec line 20: Deprecated option ServerKeyBits
Dec 24 00:11:11 kwynn.com sshd[18675]: rexec line 31: Deprecated option RSAAuthentication
Dec 24 00:11:11 kwynn.com sshd[18675]: rexec line 38: Deprecated option RhostsRSAAuthentication
Dec 24 00:11:12 kwynn.com sshd[18675]: reprocess config line 31: Deprecated option RSAAuthentication
Dec 24 00:11:12 kwynn.com sshd[18675]: reprocess config line 38: Deprecated option RhostsRSAAuthentication
[end deprecated]

Dec 24 00:11:12 kwynn.com sshd[18675]: Accepted publickey for ubuntu from [my local IP address] port 50448 ssh2: RSA SHA256:[30-40 base64 characters]
Dec 24 00:11:12 kwynn.com sshd[18675]: pam_unix(sshd:session): session opened for user ubuntu by (uid=0)
Dec 24 00:11:12 kwynn.com systemd-logind[960]: New session 284 of user ubuntu.
Dec 24 00:11:12 kwynn.com sshd[18729]: error: connect_to kwynn.com port 27017: failed.
Dec 24 00:11:12 kwynn.com sshd[18729]: Received disconnect from [my local IP address] port 50448:11: Client disconnecting normally
Dec 24 00:11:12 kwynn.com sshd[18729]: Disconnected from user ubuntu [my local IP address] port 50448
Dec 24 00:11:12 kwynn.com sshd[18675]: pam_unix(sshd:session): session closed for user ubuntu
Dec 24 00:11:12 kwynn.com systemd-logind[960]: Removed session 284.

For the record, the error I get is "Cannot establish SSH tunnel (kwynn.com:22). / Error: Resource temporarily unavailable. Failed to create SSH channel. (Error #11)."

This doesn't seem to be an encryption problem, though, because my request is clearly accepted. MongoDB is bound to 127.0.0.1--internal connections only--but this shouldn't be a problem because, based on traceroute, my system knows that IT is kwynn.com (it "knows" this via /etc/hosts). It doesn't try routing packets outside the machine.

On the other hand, this won't work in the sense that 3T won't connect:

ssh -L 27019:kwynn.com:27017 ubuntu@kwynn.com -i ./*.pem

Solution

Huh. I just fixed my problem. If I put kwynn.com in /etc/hosts as 127.0.1.1 then 3T won't work through "manual" ssh forwarding (like my command above), even if I forward as 127.0.1.1. If I put kwynn.com in /etc/hosts as 127.0.0.1, 3T works 3 ways: either through the above (127.0.0.1) OR this:

ssh -L 27019:kwynn.com:27017 ubuntu@kwynn.com -i ./*.pem

AND 3T works without my "manual," command-line ssh port forwarding, through its own ssh tunnel feature, which solves my original problem. However, I'm glad I learned about ssh port forwarding.

I need to figure out what the difference is between 127.0.1.1 and 127.0.0.1. AWS puts the original "name" of the computer in /etc/hosts as 127.0.1.1 by default, and I just read instructions to use 127.0.1.1. Oh well, for another time...
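
For the record, the /etc/hosts difference looks roughly like this (reconstructed from memory, not a copy of the live file):

127.0.0.1  localhost kwynn.com    # this way, 3T works all 3 ways
# 127.0.1.1  kwynn.com            # the AWS-style default that broke the 3T tunnel for me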

December 21, 2017 - kwynn.com has its first SSL cert, Mongo continued

I'm starting to write around 11:08pm. I'll probably post this to test the link just below, then I should write more.

SSL

Kwynn.com has its first SSL certificate. You can now read this entry or anything else on my site through TLS / SSL. I have not forced SSL, though: there's no automatic redirect or rewrite.

I remember years ago (2007 - 2009??), a group was trying to create a free-as-in-speech-and-beer certificate authority (CA). Now it's done, I've used it, and it's pretty dang cool. Here are some quick tips:

my ssl.conf

Rather than letting certbot mess with your .conf, it should look something like the following. Once the 3 /etc/letsencrypt files have been populated via certbot ... certonly, you're safe to restart Apache.

I included ErrorLog and CustomLog commands to make sure SSL traffic went to the same place as non-SSL traffic.

<VirtualHost *:443>

	ServerName kwynn.com
	ServerAdmin myemail@example.com

	DocumentRoot /blah
	<Directory /blah>
		Require ssl
	</Directory>

ErrorLog ${APACHE_LOG_DIR}/error.log
CustomLog ${APACHE_LOG_DIR}/access.log combined

SSLEngine  on
Include /etc/letsencrypt/options-ssl-apache.conf
SSLCertificateFile /etc/letsencrypt/live/kwynn.com/fullchain.pem
SSLCertificateKeyFile /etc/letsencrypt/live/kwynn.com/privkey.pem
</VirtualHost>

That does NOT force a user to use SSL. "Require" only applies to 443, not 80. If you want to selectively force SSL in PHP (before using cookies, for example), do something like this:

    if (!$_SERVER['HTTPS'] || $_SERVER['HTTPS'] !== 'on') {
		header('Location: https://' . $_SERVER['HTTP_HOST'] . $_SERVER['REQUEST_URI']);
		exit(0);
    }

As a critique of the above, perhaps the first term should be !isset($_SERVER['HTTPS']), but what I have above gets rid of the warning in the Apache error log. I'll try to remember to test this and fix it later.

MongoDB continued -- partial SSL

I started to secure MongoDB with SSL / TLS, but then I noticed the Robo 3T option to use an SSH tunnel. Since one accesses AWS EC2 through an ssh tunnel anyhow, and I want access only for me, there is no need to open MongoDB to the internet. I'd already learned a few things, though, so I'll share them. Note that this is not fully secured because I had not used Let's Encrypt or any other CA yet, and I'm skipping other checks as you'll see. I was just trying to get the minimum to work before I realized I didn't need to continue down this path. See Configure mongod and mongos for TLS/SSL.

cd /etc/ssl/
openssl req -newkey rsa:8096 -new -x509 -days 365 -nodes -out mongodb-cert.crt -keyout mongodb-cert.key
cat mongodb-cert.key mongodb-cert.crt > mongodb.pem


Then set up the config file as such:

cat /etc/mongodb.conf

storage:
  dbPath: /var/lib/mongodb
  journal:
    enabled: true

systemLog:
  logAppend: true

net:
  bindIp: 127.0.0.1
  port:   27017
  ssl:
    mode: requireSSL
    PEMKeyFile: /etc/ssl/mongodb.pem

******
Then the NOT-fully-secure PHP part:

<?php
set_include_path('/opt/composer');
require_once('vendor/autoload.php');

$ctx = stream_context_create(array(
	"ssl" => array(
	    "allow_self_signed" => true,
	    "verify_peer"       => false,
	    "verify_peer_name"  => false,
	    "verify_expiry"     => false
	)
    )
);

$client = new MongoDB\Client("mongodb://localhost:27017", 
				array("ssl" => true), 
				array("context" => $ctx)
		);

$dat = new stdClass();
$dat->version = '2017/12/21 11:01pm EST (GMT -5) America/New_York or Atlanta';
$tab = $client->mytest->stuff;
$tab->insertOne($dat);

Dec 18 - MongoDB (with PHP, etc.)

I started using relational (SQL) databases in 1997. Finally in the last few years, though, I've seen a glimmer of the appeal of OO / schema-less / noSQL / whatever databases such as MongoDB. For the last few months I've been experimenting with Mongo for my personal projects. I'm mostly liking what I'm seeing. I haven't quite "bitten" or become sold, but that's probably coming. I see the appeal of simply inserting an object. On the other hand, I've done at least one query so far that would have been far easier in SQL. (Yes, I know there are SQL-to-Mongo converters, but the one I tried wasn't up to snuff. Perhaps I'll keep looking.)

I've been using Robo 3T (v1.1.1, formerly RoboMongo) as the equivalent of MySQL Workbench. I've liked it a lot. In vaguely related news, I found it interesting that some of the better Mongo-PHP examples I found were on Mongo's site and not PHP's. The PHP site seems rather confused about versions. I'm using the composer PHP-Mongo library. Specifically, the results of "$ composer show -a mongodb/mongodb" are somewhat perplexing, but they include "versions : dev-master, 1.3.x-dev, v1.2.x-dev, 1.2.0 ..." At the MongoDB command line, db.version() == 3.4.7. I don't think Mongo 3.6 comes with Ubuntu 17.10, so I'm not jumping up and down to install "the hard way," although I've installed MDB "the hard way" before.

Mostly I'm writing this because I've been keeping that PHP link in my bookmarks bar for weeks. If I publish it, then I don't need the link there in valuable real estate. Although in a related case I forgot for about 10 minutes that I put my Drupal database timeout fix on my web site. Hopefully I'll remember this next time.

Dec 17, 2017

Today's entry 2 - yet another Google Apps Script / Google Calendar API error and possible Google bug

I solved this before I started the blog and wrote about the other errors below. The error was "TypeError: Cannot find function createAllDayEvent in object Calendar." This was happening when I called "CalendarApp.getCalendarById(SCRIPT_OWNER);" twice within a few lines (milliseconds or less) of each other. The failure rate was something like 10 - 15% until I created the global. The solution is something like this:

var calendarObject_GLOBAL = false;

function createCalendarEntry(summary, dateObject) {
	var event = false;
	event = calendarObject_GLOBAL.createAllDayEvent(summary, dateObject);	
}

calendarObject_GLOBAL = CalendarApp.getCalendarById(SCRIPT_OWNER); // calendar object

createCalendarEntry('meet Bob at Planet Smoothie', dateObject123);

I'm not promising that runs; it's to give you the idea. Heaven forbid I post proprietary code, and there is also the issue of taking the time to simplify the code enough to show my point. I should have apprentices for that (hint, hint).

I was getting errors when I called CalendarApp... both inside and outside the function. I suspect there is a race condition bug in Google's code. We know the hard way how fanatical they are about asynchronicity. Sometimes that's a problem.

Yes, yes. I'm being sarcastic, and I may be wrong in my speculation. I understand the benefit of all async. But isn't part of the purpose of a blog to complain?

Today's entry 1

I just updated my Drupal database connection error article

Dec 6, 2017 - today's entry 2 - fun with cups and Drupal runaway error logs

I just discovered that /var/log/cups was using 40GB. Weeks ago I noticed cups was taking 100% of my CPU (or one core, at least) and writing a LOT of I/O. It was difficult to remove it entirely. The solution was something to the effect of removing not only the "cups" package but also the cups-daemon package. cups is the Linux printing service. I haven't owned a working printer in about 6 years, and I finally threw the non-working one away within the last year.

I've had the same runaway log problem with Drupal writing 1000s of warnings (let alone errors) to "watchdog." It took me a long time to figure out that's why some of my Drupal processes were so slow. It seems that Drupal should simply stop logging errors after a certain number of iterations rather than thrash the disk for minutes. If I cared about Drupal, perhaps I would lobby for this, but I have come somewhere close to despising Drupal--that's another story for another time.

Dec 6, 2017 - fun with systemd private tmp directories

This happens when you just want to use /tmp from Apache, but no, you get something like /tmp/systemd-private-99a5...-systemd-resolved.service-Qz... owned by root and with no non-root permissions. (Yes, yes, I have root access. That's not the point.) Worse yet, there are a bunch of such systemd directories, so which one are you looking for? Yes, yes, I'm sure there is a way to know that. Also not the point. The point is: please just make it stop!

Solution (for Ubuntu 17.10 Artful Aardvark)

  1. with root permission, open for editing: /etc/systemd/system/multi-user.target.wants/apache2.service
  2. Modify this line from true to false: PrivateTmp=false
  3. run this: sudo systemctl restart apache2.service
  4. I don't think you need to restart apache (see note below), but I'm not sure. I did restart Apache, but I didn't try it without restarting Apache.
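
One related note (I don't recall whether I hit this myself): if systemctl warns that the unit file on disk has changed, reload the unit definitions before restarting:

sudo systemctl daemon-reload
sudo systemctl restart apache2.service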

Notes

I don't even know if restarting the apache2.service is the same thing as restarting Apache or not. On this point, it is worth noting that sometimes you have to stop going down the rabbit hole, or you may never accomplish what you set out to do. Yes, I should figure out what this systemd stuff is. Yes, I should know if the apache2.service is separate from Apache. One day. Not when I'm trying to get something very simple accomplished, though. Also, yes, I understand the purpose of a root-only private directory under /tmp. Yes, I understand that /tmp is open to all. But none of that is the point of this entry.

If you can't tell, I'm a bit irritated. Sometimes dev is irritating.

For purpose of giving evidence to my night owl cred, I'm about to post at 2:24am "my time" / EST / US Eastern Standard Time / New York time / GMT -5 / UTC -5.

2017, Nov 14 (entry 5)

I did launch with entry 4.

I just took an AWS EC2 / EBS snapshot of an 8GB SSD ("gp2") volume from my Kwynn.com "nano" instance at US-east-1a. With my site running, it took around 8 minutes. The "Progress" showed 0% for 6 - 7 minutes, then briefly showed 74%, then showed "available (100%)." It ran from 2:55:34AM - around 3:03am. My JS ping showed no disruption during this time. CPU showed 0%. I didn't try iotop. (Processing almost certainly takes place almost if not entirely outside of my VM, so 0% CPU makes sense.)

This time seems to vary over the years and perhaps over the course of a day, so I thought I'd provide a data point.

Entry 4 and launch attempt 2

I wrote entries 1 - 3 at the end of October, 2017, but I have not posted this yet. I'm writing this on Friday, November 10 at 7:34pm EST (Atlanta / New York / GMT -5). I mention the time to emphasize my odd hours. See my night owl developer ad.

I'm writing right now because of my night owl company (or less formal association) concept. My potential apprentice whom I codenamed "Mr. 4.6 Hours" has been active the last few days. I'd like to think I'm getting better at the balance between lecturing, showing examples, and leaving him alone and letting him have at it. I think he's making progress, but he's definitely making *me* think and keeping me active. Details are a longer story for another time. Maybe I'll post some of my sample code and, eventually, his code.

He's not around tonight, and I miss the activity. As I said in the ad, I'd like to get to the point that I always have a "green dot" on Google Chat / Hangouts or whatever system we wind up agreeing on.

Based on the last few days, I have a better idea of how to word my ad and the exchange I want with apprentices. Perhaps I'll write that out soon.

dev rules 1 and 2

Rules 1 and 2 are in entries 1 and 3, respectively, below.

Rules 3 and 4 are way "above" / later.

Entry 3: dev rule #2

My first GAS and perhaps the 2nd, if it is indeed a server problem, bring up my rule #2:

Kwynn's software dev rule #2: always host applications on a site where you have root access and otherwise a virtual machine--something you have near-total control over. It should be hard to distinguish your control of the computer sitting next to you versus your host.

Amazon Web Services (AWS) meets my definition. AWS is perhaps one of the greatest "products" I've ever come across. It does its job splendidly. When they put the word "elastic" (meaning "flexible") in many of their products, they mean it.

Others come close. I used Linode a little bit; it's decent. I have reason to believe Rackspace comes close. I am pretty sure that neither of them, though, allows you to lease (32-bit) IP addresses the way AWS does. I am reasonably sure that getting a 2nd IP address with Linode or Rackspace is a chore--meaning ~$30, human intervention, and / or a delay is involved. With Amazon, a 2nd IP address takes moments and is free as long as you attach it to an (EC2) instance.

This rule is less absolute than #1. Violating it always leads to frustration and wasted time, though. Whether the wasted time is made up for by the alleged benefits of non-root hosts is a question, but I tend to think not. I've been frustrated to the point of ill health--one of the very few times I've *ever* been sick. That's a story for another time, though.

If it's not clear, using GAS violates the rule because of the situation where there is nothing you can do. I had some who-knows-the-cause problems with AWS in late 2010, but I've never had a problem since. If, heaven forbid, I did have a problem, I could rebuild my site in another Amazon "availability zone" pretty quickly. As opposed to just being out of luck with GAS.

Why I violate the rule with GAS is another story, perhaps for another time. I'll just say that if it were just me, I'd probably avoid GAS. With that said, some time I should more specifically praise some features of GAS as it applies to creating a Google Doc. I was impressed because given the business logic limitations I was working with, GAS was likely easier than other methods.

Entry 2: Google Apps Script and StackOverflow.com

I've been considering a blog for months if not years. I finally started because of this problem I'm about to write about.

This blog entry deals with both the specific problem and a more general problem.

The specific problem was, in Google Apps Script (GAS), "Server error occurred. Please try saving the project again". The exact context doesn't really matter because if you come across the problem, you know the context.

I spent about an hour chasing my tail around trying variations and otherwise debugging. At some point I tried to find info on Google itself. Google referred "us" to StackOverflow.com (SO) with the [google-apps-script] label. Google declares that to be the official trouble forum. As it turned out, someone else was having the same problem. I joined SO in order to respond. Then roughly 4 others joined in. We were all having the same problem, and nothing we tried fixed it. I am 99% sure it was a Google server problem and there was nothing we could do. The problem continued during that night. Then I was inactive for ~14 hours. By then, everything worked.

The more general problem I wanted to address is the way SO's algorithms handled this. The original post and my response are still there several weeks later. However, others' perfectly valid responses were removed. To this day, SO still says, "Because [this question] has attracted low-quality or spam answers that had to be removed, posting an answer now requires 10 reputation on this site..."

This sort of algorithmic failure troubles me. I'd like the memory of those deleted posts on the record.

I was motivated to write about this because I encountered another GAS error a few hours ago that I once again suspect is a server error. This time, I was the one who started the thread. 2 hours later, no one has answered. I'm curious how this turns out. I'm not linking to the thread because it's still possible I caused the problem. Also, I'm not linking to it because Google almost immediately indexed it, so SO is the appropriate place to go.

Entry 1: dev rule #1

Kwynn's Software Dev Rule #1: Never develop without a debugger. You will come to regret it. To clarify terms, by "debugger," I mean a GUI-based tool to set code breakpoints, watch variables, etc. Google Chrome Developer Tools "Sources" tab is a debugger for client-side JavaScript. Netbeans with Xdebug is a debugger for PHP. Netbeans will also work with Node.js and Python.

It is tempting to violate this rule because you think "Oh, I'll figure it out in another few minutes."

Another statement of this rule is "If you're 'debugging' with console.log or print or echo, you're in big trouble."
