home

tech blog

HTML5 valid (my copy of the validator)
HTML5 valid (W3C copy)

pinned

apprentice to do

introductory notes:

1. For time zone purposes, I am near Atlanta and use my local timezone. Atlanta is the same as New York.

2. Drafts / previous versions of this page are on GitHub. Usually the live version is way ahead of GitHub, but sometimes vice versa.

3. As of mid-2024, this is not as true anymore, but, before, I was often posting to GitHub and not commenting here. If you click on my repos, they are sorted by the latest changes.

4. The line between my personal blog and this one is becoming thin.

2024

Sep 24

alleged humans who can't pass a 1983 robot test

This is in part an exercise or thought experiment for potential apprentices.

So I posted an ad to a site with an anonymizing email relay. That is, email back and forth does not reveal one's real address, in either direction.

I posted on the early morning of the 17th. Four "people" have responded. The first response was 7 hours later. It starts with "I don't know if you're still looking." Who finds a room in 7 hours, mostly during the middle of the night? There is no indication that "he" has read *my* ad specifically. Totally generic. A robot I wrote in 1983 when I was 9 could have sent that email; I'll come back to that. (Codename officer rank--the alleged name.)

Everyone wants to use text or phone. A large portion of the point of the site is that anonymized address. I would love to know what the market rate is for valid phone numbers. Why should I assume these "people" are anything other than "people" seeking phone numbers? (I'm assuming the market rate only works for people in Third World countries, but it's not zero.)

3 hours later, at noon, response #2. "Hi there[. two newlines] Still looking for a place?" I mean, I guess that's a bit more plausible, but is it? That is the entirety of the message. I can't tell it from a 1983 robot. ("Anthony")

3.5 days after I posted, codename European country (#3). "Hello, I just got your number from [site] are you still looking for room I have room available if you're interested text me back asap." That's the entirety. Robot! Or I can't tell otherwise. And he damn well better not have gotten my number. He's flipping emailing me!

#4, 4.5 days after my post. "David." "Interested in talking. Leave contact number." That's it. See a trend?!

My response was probably too sarcastic. Mind you that I NEVER, EVER do this when I reply to ads. Fat lot of good it's done me. This stuff ticks me off.

I can understand wanting to wait a round before spending time. All I need is one sentence that indicates a human at the other end.

If you are real, I don't mean to be sarcastic, but I believe you're my 5th reply. I could have written a bot in 1983 when I was 9 that could send all of these generic emails. None of them are responsive to my ad specifically. Again, that takes one sentence.

His reply: "Good morning. I'm for real! Sent from my iPhone." Does ANYONE SEE A TREND!!! Yes, my 1983 robot could determine that it was morning. It could pick out "human" and "real" and reply with "I'm for real!" What is WRONG with people! Hopefully my potential apprentices see the problem with his response.

(It looks like I only got 4 replies. I thought it was 5.)

I could go on, but I'll try to stop.

Sept 20

AWS downgrade for "political" reasons

I got this 3 days ago from an AWS recruiter. It was in his email signature. It's the typical Amazon penis logo in rainbow colors alongside a black power fist.

Above these logos, the text is "Work hard. Have fun. Make history." Below is "Amazon is an Equal Opportunity Employer – Minority / Women / Disability / Veteran / Gender Identity / Sexual Orientation / Age."

I post the image above / below to condemn it!

Amazon rainbow penis logo and black power logo


My response is below. Perhaps the year should have been 1922 or something. I didn't give the response tremendous thought. Perhaps I'm being way, way too polite, or perhaps not.

Washington State is too politically dangerous. It's not necessarily as bad as moving to the USSR in 1920, but it's close to that.

Similarly, I am very sad to see your rainbow Amazon logo and black power logo. I have been a happy AWS customer for 14 years, but seeing such logos makes me question whether it's time to move.

Meanwhile, American Cloud's server prices have come down. I pay about $15 / month for AWS for this site. American Cloud is down to $22 a month (3 cents / hour) for a comparable-to-superior offering. The extra $7 doesn't bother me. I am still too close to homelessness and starvation, though. I am hesitant to spend the energy on a move.

September 13

a rant on pagination and its implications for dev generally (in this case in Next.js / React.js)

I just confirmed something that I'd feared. Many web sites have a very standard pagination format. In this case, our client has roughly 5,000 active customers. Standard pagination is to show them 10 at a time or sometimes to have a "10 / 50 / 200" option for customers per page. I'll come back to this.

I'm looking at my law office code. Ironically, I've committed a similar sin in that I've failed to add a newline after every </tr>. I almost started up the billing clock to fix that because I consider that so bad that it should be fixed. Then I realized that I'll be converting the whole timecard system in the next few months--converting it from a wretched Drupal "field collection" format into my own MongoDB format.

Rather than paginating, I'm listing roughly 9 years worth of timecards. I can't easily count them because without the newlines, Firefox is having trouble processing the text. (I'm not certain newlines would improve that, but probably.) Without some effort, Firefox only tells me there are "over 1,000" opening trs. I'm trying to scroll through them to get to the bottom and see if FF will actually count them. In any event, it's probably nearly 3,000 rows, and they pop up instantly on my local server (which is what I'm using for comparison).

Poor Firefox. I might fix that if I'm "close to" that particular code. I used the database, and there are almost 3,500 timecards. I haven't had to worry about pagination because they pop up immediately. I'm using PHP and Apache, and I'm almost certainly generating the HTML on the server-side. (That's a very good guess / memory. I'm going to resist looking at my code.)
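
As a hedged illustration--Node.js rather than my actual PHP, and with invented timecard field names--server-side generation, with the newline sin corrected, is about this much code:

// build all the rows in one string, with a newline after every </tr>
function timecardRows(timecards) {
    let html = '<table>\n';
    for (const tc of timecards) {
        html += '<tr><td>' + tc.date + '</td><td>' + tc.hours + '</td></tr>\n';
    }
    return html + '</table>\n';
}

console.log(timecardRows([{ date: '2024-09-01', hours: 2.5 }, { date: '2024-09-02', hours: 1 }]));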

In any event, I was curious what would happen on the "new" project if I tried to show all 5,000 customers. The answer is that it took varying lengths of time. The longest may have been 20 seconds. I'm not timing it. The point is that it's way too long.

One point I want to make is that this means the common pagination process is necessary because the framework is so crappy that it takes an insane amount of time to render. And this is the latest and greatest stuff that I hear about over and over: React.js. It's such a low standard. We know for a fact that Indians are involved.

I think I've mentioned it before, but I got reliable data from two of the big telecom companies. The expert opinion was that 2 - 3 out of 30 Indian devs are worth a darn.

Because I am to a degree a "political" commentator, I will flesh this out again. Big Evil Corporatia will try to claim that Americans don't have the right education. I'm assuming that Indian degrees are heavily subsidized by the government, and that they are otherwise cheap in every sense of the word.

I will try to reign in this rant, but the short version is that Big Evil Corporatia is just that. They are traitors. They are engaged in economic, psychological, and chemical warfare against Americans. Economic warfare in terms of manipulating the economy to increase cost of living. Psychological in terms of DIE WHITEY (DEI). Chemical in terms of "vaccines," which I classify as chemical warfare more so than biological. Importing millions of Indians is another form of treason. I suppose that's genetic warfare in terms of paving the way to eventually eliminating whites. Of course, they don't want to eliminate whites; they want to find new ways to drain more and more of their energy. (As a half-Jew, I'm nowhere near fully white, so I won't include myself.)

The fact that the latest and greatest coding method (React.js) takes many seconds to render 5,000 customers is an indication of this evil. The Indians consider it standard, and the fact that the Indians set the standard.... (Trying to stop.)

September 11

a new-to-me regex (applied to Wordle)

Spoiler alert: this involves the answer to today's Wordle.

The J-- York Slimes wishes to exceed the (per capita) body count of Lenin, Stalin, and Mao. I hope to see the corporate entity abolished / seized and several of its owners, editors, and reporters tried and executed for mass murder, psychological warfare, treason, etc. People trying to convince you to torture yourself to death do not have free speech rights. (Death by vaccine, etc.)

The situation is similar to using Big Evil Goo Mail, Satan's Code Repos (GitHub), Goo Tube, and such. If I can get some entertainment and fun out of them, I'll grudgingly do it. Thus, I play Wordle.

I have very fond memories of Mastermind as a child. I wrote an implementation of it in C for the SatanSoft command line in ~1996. Wordle has the same rules, applied to 5 letter words rather than colored pegs.

A few years ago I calculated the most effective starting words. "Raise" and "arise" are equally effective. "Effective" means that they eliminate the most words. Specifically, they eliminate 2,142 out of the 2,309 words I had in my possible list at the time. I think I've seen 3 words beyond those in the last few years of play.

Today I started with "arise." I most certainly need a raise, but I think of Dr. Frankenstein, "Arise!" I usually can't resist "arise." I think "raise" was a word months ago, but I haven't hit "arise."

'A' was in the right place (green), 'R' was not in the word, 'I' and 'S' were in the word but in the wrong places, and 'E' was in the right place. By shifting "is" to the left, I quickly guessed "aisle" and was correct. Sometimes I grep the word list to see what was possible after each guess. In this case, only "aisle" was possible.

Today I learned of a more pithy regex to use:

grep -P "(?=a[^r][^ri][^rs]e)(?=(.*[si]){2})" /var/kwynn/wordle_list_2309_words.txt

where that list is in GitHub (Satan's Code Repos). Today that resulted in "aisle" as the only result.
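
For the record, the first lookahead encodes the greens and grays by position, and the second demands two letters from {s, i} somewhere in the word. (Strictly, (.*[si]){2} would also accept a word with two s's and no 'i', but no such word survived today.) A Node.js equivalent, assuming the same word list file, might look like:

const fs = require('fs');

const words = fs.readFileSync('/var/kwynn/wordle_list_2309_words.txt', 'utf8')
    .split('\n').filter((w) => w.length === 5);

// same two lookaheads as the grep -P command above
const re = /^(?=a[^r][^ri][^rs]e)(?=(.*[si]){2})/;
console.log(words.filter((w) => re.test(w))); // [ 'aisle' ]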

August 24

posted to the Gab Linux board

For background, my intro to this board was 6 (of my) Gabs ago on May 28. I would like to think I'm a foaming-at-the-mouth Linux fanatic.

I often mention my apprentice offer on Gab, but I don't often reach out to peer developers / sysadmins / etc. I've never quite made freelance development work financially, but I've gotten close, and I still think it's worth redoubling my efforts, getting help, and making it work. Please let me know if you're interested in joining my quixotic quest. I've made variants of this request dozens of times on Gab, so you'll find plenty of info.

In somewhat related news, Gab users rarely send PMs versus likes and whatnot. I'm looking for pen (keyboard) pals and such. I need more people to "talk" to, especially during the night (Eastern Time). Within my megabytes of hand-written HTML on my website (easy to find by tracing my Gab history), if you find anything to talk about, please ping me by some means or another.

longer version (maybe, eventually)

I posted the above around 1:48am. I started the following before I posted. I may continue.

I often mention my apprentice offer, but I forget to reach out to peer developers / sysadmins. Some background is in order. Years before Big Evil Tech was obviously such, I was already pulling away from the perpendicular ("perp") economy. (This being parallel, at least in theory.) I'll post this around 2am my time (EDT) on a Saturday morning, which gives you an idea of my challenge. If I'm being sarcastic, I call myself the King of the Night Owls, the Andals, the First Men, Gondor, Arnor, and Riva.

For years freelancing has not quite worked financially, mostly because I don't have the social skills for sales and related stuff.

August 10

apprentice to-do

As an example of a linked list data structure, this was the previous entry (a few weeks ago). I think there is another email to add, too. In any event, this time I'll include his email below. Hopefully I can answer it some week or month or another. Not today, or at least not now. It's a good set of questions.

Regarding "true understanding" of software development

Aug 10, 2024, 12:34 PM

Hey Kwynn,

I had another thing I wanted to bring by you.

I still feel like a junior developer, even though I’ve been at this for four years.

At what point did software development "click" for you, and what led you to that point?

I have "JavaScript Brain" and so I find I fall on my face when trying to actually solve any meaningful software development problem outside of "make my website do X basic things"

I know Leetcode is rather specific, but I have difficulties figuring out how to solve the most basic LC problems, and I feel it’s because I don't understand how to "problem solve" in actual software development.

I’m feeling rather lost, and I hope you can help me out perhaps by giving me some anecdotal advice or something.

August 7

First posted about 2:12am and then another note at 2:15+. Then another at 2:18am.

SSL / HTTPS problems

The browser: "Secure Connection Failed. An error occurred during a connection to blah.example.com. SSL received a record that exceeded the maximum permissible length. Error code: SSL_ERROR_RX_RECORD_TOO_LONG. The page you are trying to view cannot be shown because the authenticity of the received data could not be verified."

"curl https://blah.example.com" Result: "curl: (35) OpenSSL/3.0.13: error:0A00010B:SSL routines::wrong version number"

As I traced through my Apache sites-available, I found that the letsencrypt certificate entries in /etc/letsencrypt are simply gone. I have no idea where they went. I'll put my IP address in my DNS entry, open up ufw, and run certbot again.

I've found that a popular ISP seems to block incoming HTTP, even over IPv6. That matters in this case because I'll be trying to run certbot, which needs to reach the server. When you pull up the router in a browser, it's the dumbest interface I've ever seen. Dumb in the sense that there are no options. My solution is to use my cell phone hotspot and create AAAA records (IPv6).

Maybe the certificate vanished because I let it expire for a few weeks. (It's just a test system.)

August 4

Android notes

  • There are two build.gradle.kts files - one is under app and the other is a level above. The one under app is more interesting to me.
  • It seems that minSdk is what one may want to change in the app-level file. Changing the others causes grief. I changed minSdk to 21 and just got a simple app to run in Android 5.0, which is really ancient.
  • I have compileSdk and targetSdk set to 34, and minSdk set to 21. This gets me what I want without dependency grief. "What I want" is to run on Android 5.0. (See the sketch after this list.)
  • If you change one or more of the Sdk options, you may need to remove the contents of ~/.gradle/caches to force a re-download. If you do that, make sure Android Studio is closed; otherwise you get some weird errors because you deleted the data from "underneath" a running program. Re-opening Studio seemed to fix that problem.
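
For reference, a minimal sketch of the relevant part of that app-level build.gradle.kts, assuming a stock "Empty Activity" project; the namespace / applicationId and version values are illustrative, and the three Sdk lines are the ones discussed above:

android {
    namespace = "com.example.myapp"    // hypothetical package name
    compileSdk = 34                    // SDK the app is compiled against

    defaultConfig {
        applicationId = "com.example.myapp"
        minSdk = 21                    // lowest OS the app installs on (Android 5.0)
        targetSdk = 34                 // OS behavior the app is tested against
        versionCode = 1
        versionName = "1.0"
    }
}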

July 31

The account manager entry for my Lifelog friend is yesterday's.

follow-up to the fake job

The initial entry was on July 10. The malicious domain name was created on July 8, and the whois shows an update on July 11 and a "Domain Status: clientHold." There is a link to ICANN which in turn says, "This status code tells your domain's registry to not activate your domain in the DNS and as a consequence, it will not resolve. It is an uncommon status that is usually enacted during legal disputes, non-payment, or when your domain is subject to deletion."

Before, the site went to a parking "lot"--a generic non-site. Now the browser simply says that it can't be found. It would appear that "the community" got on the case fairly quickly. If I didn't mention it, I emailed the real company. I didn't get a reply. When my friend called the real company, they hinted that they'd gotten other calls. I say "hinted" because the call was very short, and my friend did not clarify.

Ideally, I would have liked to have pounded hard on this situation, but I was one removed from the action. Also, I'm having my own job hunting problems.

I thought to look up the domain a few minutes ago. That ICANN status is very interesting. Seemed worth mentioning.

July 30, 2024

an apprentice / business partner request

My Lifelog foray got me back in touch with someone I proposed working with a few years ago. We're going to try again, or at least it looks like it.

She gets first dibs at this; it's not like there is a line. Actually, there are potentially two people, so there is a line. In any event, I want to make this public because it's the sort of thing I'm looking for generally.

Back in December and January I did a few little projects for a client. I fixed a nearly completely broken WordPress site, and I fixed some DNS settings on another site that was completely broken because the DNS was pointing to the wrong place. I should mention that his diagnosis of the problem was completely wrong. It would seem that I am slowly learning not to put too much stock in those who know "just enough to be dangerous." I've found out the hard way what the term means.

He offered me more work, but it was as much managerial as technical, and I'm leery of that. I'm having enough trouble connecting with people such that I don't need to push myself to deal with them more. The important part is that he wanted to continue the business relationship.

However, he fit a personality type I've dealt with before, so I let the communication die. He approached me again several months later, though. For a number of reasons, I didn't get back to him. (I'm not at all certain that he still has work for me; I address that below.) I was leery of continuing in part because he absolutely insists on talking on the phone when the information is easy to convey in writing. The related problem is that he is on Eastern Time and keeps dawn-supremacist hours. It's hard to get me to talk during that time. During the summer, I'm barely waking up at close of business.

When I emailed him he would go so far as to say "I need to talk to you" or something. And then, when we did talk, it wasn't productive. I dealt with that sort of personality years ago, and it led to a total communication breakdown, and that project simply stalled out. I don't want to set myself up for that. I simply don't have the personality to deal with such a different personality for the long-term. At least, I won't do it when I'm this frayed and already increasingly irritated at most people. If I were financially stable, I might find the peace of mind to deal with him.

So, I asked my renewed-Lifelog correspondent if she'd like to play the role of account manager. As I remember, I talked to that client as late as 7pm, and he would contact me at 9 or 10am. So, the account manager would need to talk-talk to him sometimes during those hours.

In her case, she might be able to do the work, which is fine with me. Ideally I'd like a sales commission, but creating business within my network is the goal, whether I am immediately paid or not.

I should add that he was perfectly pleasant to talk to, but it's just too difficult for me to keep up with that sort of communication.

Then there is the question of how to make introductions. I may or may not want to first confirm whether he still has work. I suppose that depends on my potential account manager's sales personality. What do you think? He knows that I don't like to work during "normal" hours, although it seems that almost no one can really get that through their heads. It's so alien that they can't process it. So it would probably be fine to explain that I'm interested in doing work for him, but I'm going to need a business-hours intermediary.

Hopefully that gets the point across. I'm not going to polish this entry because I'm hoping she can read it sooner than later.

July 28, 2024

back to 40 bit net prefixes

Per my entries from the last week or so, despite my attempts to eke out a few fewer bits, I've had to go to 3 X 40 bit prefixes. (Fewer bits rather than more because I'm trying to restrict the possibilities.) I wonder if more analysis would show patterns I could use to better restrict the ranges.

July 20

Null change. I accidentally changed this file rather than my personal blog. As of 21:30.

July 18

New entries going up through 18:18 and beyond, in no order.

newly pinned - apprentice to dos

"HTMX & Go" email, July 16. Note to self to get back to this. Hopefully he'll be flattered that I post it here. For reasons I won't exhaustively get into, I still need more apprentices. For one, no one is covering the night shift.

IPv6 ranges revision 1

Regarding a recent entry, I had to expand to one of the 40 bit subnets. I am using the ISP service somewhat differently, so it's interesting that one thing led to another. Specifically, I went from 43 bits to 40. That is precisely one of the Atlanta ranges listed by the ISP; they list that one as 40 bits.

Open Metaverse - formerly pinned

I was all excited about this some time ago. I'll have to re-evaluate it some time. I shouldn't leave something pinned if it fades into memory.

July 17

IP ranges for AWS security groups

An introductory comment: IPv6 addresses are 128 bits, but ISPs usually assign a 64 bit subnet to a house. Thus, all I care about are the high-order 64 bits. I tacked this on as the last thing I wrote, so I repeat that just below (and again, I think).

I am in a situation where my IPv6 address is changing unusually often. Up until very recently, I could define my ssh (port 22) range to 64 bits. ISPs usually assign a 64 bit subnet to a customer. Thus, it's exactly the same security as one 32 bit address assigned to a house. I could use that range for months. Now it would appear that I cannot get away with that. So, first I did a (Linux command) "whois" on one of the addresses. This led me to an exhaustive list of geo-coded IP ranges that my ISP uses. I'll come back to this.

At the other end, I had 7 X 64 bit ranges in AWS. I took these for study and sorted them. First of all, thankfully, those 7 were in fact in the Atlanta ranges.

Later, I copied /var/log/auth.log and auth.log.1 and the rest of them to a /tmp directory. I unzipped the .gz versions. Then I ran


cat * | grep -oP "abcd\:[a-f0-9:]+" | sort -u

Substitute your relevant prefix for abcd. This led to 12 addresses in 3 ranges since this situation began.

In the ISP list, grep -i Atlanta *.txt | grep ab | wc showed 62 ranges. 16 of those were all possible variants of a 44 bit range, so those 16 become 1. Now we're down to 47 ranges. That one range was relevant to me. That is, my addresses had been in that range. Based on the addresses I had, 44 bits was the best I could do for that range.

42 of those ranges I have not encountered. That is, I've never had those addresses. Now we're down to 5 ranges, with one relevant to me (so far). One range was an entire 64 bits, which is really weird because that's what a house or company is assigned. So now we're down to 4 to look at / 1 relevant to me. One of the ranges had a very high order bit difference. I had not encountered it. So I'm down to 3 relevant ranges. The remaining two ranges were expressed as 40 bits in the ISP list, but so far I can whittle them down to 42 and 43 based on what I've been assigned so far.

So, our grand total for the night (early morning, well before astronomical twilight) is one each of a 44, 43, and 42 bit range. Given that ranges are assigned as 64 bit, we can subtract from 64. (What I'm calling "ranges" I believe are "subnets," but I sometimes get that backwards.) So, 20, 21, and 22 bit address blocks / ranges. That's only 7.3 million address ranges. Relative to the 64 bit or ~10^19 possibilities, I'd say that's not bad for less than 2 hours' work.
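
As a sanity check, the arithmetic in a few lines of Node.js:

// free (unconstrained) bits in each range: 64 - 44, 64 - 43, 64 - 42
const freeBits = [20, 21, 22];
const total = freeBits.reduce((sum, b) => sum + 2 ** b, 0);
console.log(total);                      // 7340032 -- about 7.3 million
console.log((2 ** 64).toExponential(1)); // 1.8e+19 possible 64 bit prefixes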

It was almost 2 hours because I was grinding through the bits from the AWS security group rules and then threw that out for auth.log and then started messing again with bits on a virtual programming calculator. I was also filtering the ISP's list various ways. As with many things, this should get quicker next time.

July 16

to a fraction of potential clients: show me your effin' code

Once again, this is why I should not be freelancing on my own. I need a minder. Maybe after I have ranted I can respond to this guy. I may be wrong in some or most cases about the following, but this is what it feels like.

This is what I think is in some potential clients' heads. In the current case, I am way exaggerating. In some cases, I am not.

A video call with screen share and cameras on both of us is the only possible way to convey information at the beginning of a project. Rather than calling for Fauci's execution, Zoom has set this standard forever more. Zoom is our hero. Zoom is our god. All hail Zoom! Put on your makeup for the camera, Kwynn. Oh, you identify as male? That doesn't matter. In transgender world, everyone wears makeup. This video call will be a beauty pageant first and about technology somewhere distantly on the list. Show me your sunny personality because that is highly relevant to coding! Smile! Oh, and you have to give me a quote within the first 30 minutes of the call.

This is what's in my head. I guess I don't need to block quote myself in this context. In this situation, the HTML in question is likely going on a public website, so it's not sensitive. I would really like some code so that I understand exactly what needs doing, and I can give a perfectly relevant demonstration. I figure it will take me less than 30 minutes to mock up the simplest demo of what he wants. Even if it takes longer, I am curious enough to fiddle with it. He's passed my writing tests. I have his phone number, I think, or I can find it. In technical terms, we're communicating well enough. Technically, we're making progress, so I'm willing to spend some time fiddling with the tech. I will know a LOT more in not very many minutes. Then I can more intelligently talk about money. It's very unlikely that we won't come to terms on money.

Ok. This is working. I'm calming down.

July 15

First post after 02:26. I'm going to keep whittling at this at 03:02 and beyond.

yet another take at my salesman / apprentice ad

the ad proper - version 1

At a minimum, this is an unpaid software apprentice offer. The good news is that I have mad software skills, and that's the potential asset. I need sales help--both direct and indirect. That is, if you have the people skills for sales, great. If you want to help me with sales indirectly, that would be better than what I'm doing now. One example of indirect help would be picking out ads for me to answer.

This is not a wage (or salary) job in the sense that I have zero money on hand. Some of the bad news is that I am a committed (genetically hard-wired) night owl, so that limits what I can do. Also, I lack the social skills to successfully freelance. Thus, I need help bringing in business.

I would not call my offer exactly commission-only. You are welcome to be in charge of the business side and collect the money. So I suppose I can call this an entrepreneurial venture with zero initial money. With that said, the asset is a good asset. Making some money with little work should be doable. Making a side income with more work may be doable. Also, my imagination is probably stunted on this topic. That's one of many reasons why I need help.

sales ad - discussion / more

Some stuff I removed regarding indirect sales help: I need a sounding board. *ANYONE* else's insight would be better than mine when it comes to sales and people. It would be helpful to keep track of all the sales in the works. I suppose we could use open-source CRM software or better yet write some.

As you scroll up and down and follow links, you'll find my site is vast. It's both a way to study the asset (me) and a record of all the ways I've pissed off some potential clients.

My initial goal is to make about $1,200 a month, where I understand that '$' does not refer to real US dollars. Only gold and silver are real US dollars, of course. My initial offer would be that anything I make above that goes to you. If we make $3,000 and you get $1,800, that is great for a month or two or more. As time goes on, I'd want to adjust that and hopefully we can both make more. I suppose it's possible to make a LOT more.

I may leave it at that and quit the business realm for the night / early morning / nearing the end of my waking period.

some more notes just before I posted the first ad

It looks like I may have condensed the following pretty well. The following is the longer form scratch pad. It also has a bit more info.

The context of this is a "jobs" board. I have always hesitated to call this a job. The potentially good or great news is that I have mad software and sysadmin skills. The least I can offer is help with learning software. Financially, the bad news is a somewhat long list. Ideally, though, my bad news is your opportunity. One piece of bad news is that I am quite the night owl. My ideal time to start "doing business" is roughly an hour after sunset. The other bad news is that I get increasingly fed up with people for an increasing number of reasons. I am hesitant to talk to strangers. For one, I've set myself up to get burned too many times. I don't trust my people skills to keep me from getting screwed. As for video calling, I rant at great length about that below; the short version is that I am a software developer, not an actor.

I have tried freelancing, and it has just barely not paid the bills. If I had help on the people end, that might be all it takes for both of us to make some money.

I have no budget to pay any regular wage. I would not exactly call this commission-only, though. It's more like you are welcome to be in charge of the operation and collect the money and then pay me. So it's more like an entrepreneurial venture with zero initial money.

There is also the option of helping me get "real jobs." That would be worth several $10,000s as a fraction of my pay for some period of time. I would formalize that agreement if we got that far. I have no idea how probable that path is. I've tried it to some degree. I can elaborate if needed.

Of all places, I've had some success on Craig's List computer gigs in various cities. I still think that could work if we could keep at the replies. I generally run out of energy on that fairly quickly. I can only answer so many ads in one "day" / 24 hour period. Maybe if we could be consistent, that's all it would take.

Reddit has an r/forhire that shows some promise, but I'm not willing to get Reddit karma until I see some signs of life, so it's a catch-22. Also, I assume Reddit is somewhat as evil as DARPA Lifelog ("Facebook") and such, so it's not an exciting prospect.

As for the various gig sites--same thing: a catch-22. I'm not sure I can be motivated to create profiles and such until I see that it can work. I think my profile on one of those is complete, though. A friend motivated me just long enough.

update on July 18

I always try to link back to previous versions of this ad.

July 10

Is there a difference between a scam hiring process and a "real" one?

In the last two days I've watched a friend go through a scam hiring process. This is a scam in the sense that the one goal is probably identity theft. It makes me sick to cite an Evil Empire source, but their list had about all of the points my friend experienced, and they have a large data set, and I could spot the scam without their help, so I suppose the list has backing.

That's the Federal (destroyers of) Trade Commission. I feel safe in assuming they spend 99.8% of their time on destruction and spend 0.2% to make it look like they are working for the good.

In any event, they included the fact that the scammers use Satan's Job Board, known to the dawn-supremacist, illiterate, Zoom-ing, shod, masked, vaxxed zombie horde as LinkedIn. LinkedIn is thus party to the scam. LinkedIn likes to natter about pUkraine and priDE-MONth. They cheer along as companies require blood clot shots. I assume they required one for their own employees; maybe they still do. They are otherwise dedicated to evil in at least several other ways. Why would they possibly spend their time policing these ads? If I can tell in a few minutes that it's fake, can't their "AI"?

The domain name of the fake recruiter's email was created for the first time ever two days ago. The FTC mentioned personal addresses, so this scam was slightly more sophisticated. The domain name was a variant of a real company. (Where "real" is ambiguous in my mind, of course.) There was nothing at the web site of the domain name, or I should say it was the default that a lot of hosts use: very generic links with no reference to the company.

The scammers got a copy of my friend's driver's license. The scam supposedly would go on to get her to pay for equipment. They hadn't gotten that far.

She talked to the "recruiter" on the phone for about 10 minutes. My best guess is that the recruiter was hired a few days ago from one of the gig sites. I suspect she does not know she's involved in a scam. The recruiter is American based on accent and probably white based on a few factors. Thus, again, my best guess is that she doesn't know she's involved in a scam. I wonder how long it will take her to figure it out, though. What will she do then?

The PDFs used the name and real snail address of a real company. Oddly, the PDFs didn't refer to any domain names. That was one of my clues.

They "hired" her in less than one business day. That was another clue. The hourly rate was also suspiciously high given the entire context. It was $65 / hour. That would not necessarily be suspicious for a junior software engineer, but there are other factors I don't want to get into because it would take too much work to obfuscate around the specific situation.

When I wrote this, my plan was to go on about how this is so similar to the "real" process that I can't tell the difference. I'm going to try to stop, though.

The job hunting process has been something near horrific for me, so I have my own emotional issues as I see her suffer through this. She had already put her 2 weeks notice in with her current job. She had time before close of business, though, to reverse that. She got really, really excited, so that adds to my woe. I fear this is going to damage my productivity for the day. I am bummed.

July 9, 2024

very irritating difference between mongosh and mongo

I am executing a file containing the following query. The shell_exec() command is

mongosh  --quiet   /tmp/kwqeq10_2021_9ddeee941d9265e105411ffd55d645b2_k.js

The magic query looks like this:


print(
	EJSON.stringify(
		db.getSiblingDB('mydbName').getCollection('mycollection').find().toArray()
	)
)

A known-to-be-working example:


print(EJSON.stringify(db.getSiblingDB('test').getCollection('my_arbitrary_collection_name').find().toArray()))   

The key to this is that the JSON result has double quotation marks / double quotes around the keys / properties (see results below). These are parse-able by many languages.

[
    {
        "_id": {
            "$oid": "668cec9babdd6dc6cc59daf6"
        },
        "someData1": 1,
        "moreData2": "2 blah"
    },
    {
        "_id": {
            "$oid": "668ceccbabdd6dc6cc59daf7"
        },
        "otherData": "blah blah blah",
        "foo": "bar"
    }
]

Here is the example code.
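
The PHP side is shell_exec() plus json_decode(). Purely for illustration, here is the same round trip in Node.js, assuming the query file above:

const { execSync } = require('child_process');

const out = execSync('mongosh --quiet /tmp/kwqeq10_2021_9ddeee941d9265e105411ffd55d645b2_k.js');
const docs = JSON.parse(out.toString()); // works because EJSON double-quotes every key
console.log(docs[0]._id.$oid);           // e.g. "668cec9babdd6dc6cc59daf6"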

June 21 - understanding binary

I have a new apprentice. I have hope that we'll work together for months and possibly years to come. She's already built up 50 karma points in case we start trying to use Reddit to get gigs.

She's taking online classes at one of those private online schools for a BS in CS. There are a handful of such schools in that category I've heard of over the years. I won't name the school because I might wind up mocking them. They go in the category of institution that I wonder if it will do her any good. I tend to think college is generally not worth it even for CS, but that's not my topic right now.

Right now she's taking a Linux sysadmin class, which in theory is great. In practice, there are a bunch of things that annoy me, which is also not my topic now. In any event, she was learning chmod. I was trying to explain the permission bits, such as chmod 755 === rwxr-xr-x .

She quickly mentioned that she does not understand binary at all. I tried to explain that decimal 567 === 5 X 10^2 + 6 X 10^1 + 7 X 10^0, and thus tried to explain that binary works the same way. So far, I'm not getting anywhere, but we've only spent a few minutes on it. She asked me, "If I don't understand this, am I not cut out to be a programmer?"

I've been pondering that question. I eventually answer it below. For now, though, rather than answer it directly, I'll say this. Forget binary for the moment. It would be a very good idea to start to think about the decimal system that we take for granted, and what it means. As above, what does 567 really mean? Base 10 (decimal) only has the digits 0 - 9. Thus, 567 or 56 or 10 is a different beast. The number "10" is an abstraction in a way that "9" is not. "9" represents something that you can easily visualize--9 objects. One can easily visualize 10 objects, but 10 is a 1 and a 0. It's an abstract usage of 1 and 0. For that matter, 4,598,839 cannot be visualized, but it can be manipulated because the number means something very specific, and there are algorithms to process the number. I mentioned that Roman numerals are very crude and cumbersome because they are not a base system. They do not have a zero. Zero is very important.
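
To put the same idea in code (Node.js), tying it back to the chmod 755 example above:

const d = 5 * 10 ** 2 + 6 * 10 ** 1 + 7 * 10 ** 0; // 567 in base 10
const b = 1 * 2 ** 2 + 0 * 2 ** 1 + 1 * 2 ** 0;    // binary 101 is decimal 5

console.log(d, b);                           // 567 5
// chmod's 755 is octal (base 8); in binary, the permission bits emerge:
console.log(parseInt('755', 8).toString(2)); // "111101101", i.e., rwxr-xr-x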

It would be great to understand bases, the importance of zero, what 567 or 5678 means, etc. It goes to abstraction and business analysis. That is, when figuring out how to code something, you have to really understand how it works.

I will encourage her to first analyze decimal and really understand what's going on. The way we do addition by hand, for example, is an algorithm. There are reasons why carrying digits over to the next decimal place works. In other words, why does stuff she learned in 4th - 8th grade work? What is really going on? (I think I understood addition when I was a child, but it took me some time longer to think through why long division works. That is, I didn't fully analyze the long division algorithm for a while.)

Computers add and divide in similar ways to how we do it by hand. Again, it's an algorithm.

Once she understands decimal consciously, then binary should be relatively easy.

I don't care if it takes her months. Her circumstances are such that getting a clear head for any length of time is problematic, so it might take months.

If we come at it from different directions over the course of months, and she's still not getting anywhere in her understanding, then the answer may be that we have to start questioning whether she's cut out for programming. As best I understand, some programmers go through their whole careers writing from specifications. I frown heavily upon such things. A programmer should be able to do business analysis / requirements gathering. I should add that I've almost never written from specifications, even at medium-sized software companies (for various reasons).

Hopefully that will do for a start.

June 10

yet another rant about another instance of the general store problem

My frustration is NOT directed at my co-developer. He's relatively new to dev and is going with the overwhelming general wisdom.

To recap, the general store problem refers to a Steven Wright joke to the effect of "I went to the general store, but I couldn't buy anything specific." In the past I've applied that to Drupal and WordPest and to some degree Magento and Angular. This time it's Next.js. I'm not even sure at this moment how Next and React relate to each other, or whether they do. Right this moment I don't care. Whatever is plaguing me runs as "npm run dev", so it's NodeJS, and I'm nearly certain it's Next.

Once again, this stuff is the "latest and greatest," "powerful," "generalized" system that has all its developers singing huzzahs and hallelujahs and Kum ba yah. Once again, I'm irritated at best, and I'm tempted to run around in a circle waving my arms and shrieking.

I'll try to take my irritants in order. The idea was to use the React (or Next or something) version of Big Evil Goo(gle) Maps. (As much as two of us involved hate Google, we have come to the painful decision that we should use it. Perhaps Goo will be seized soon enough. I have hope.)

My co-dev presented me with a version that used a lat lng center and a zoom level, which is what most start with. In the Goo Maps client-side JS API, though, months ago, I found

this.bounds = new google.maps.LatLngBounds();
// iterated for every location; position is an object with lat and lng
this.bounds.extend(position);
this.map.fitBounds(this.bounds);

I found that for 5,000 locations, it's plenty fast enough to run each of them through, and the boundaries will be expanded as needed. It's essentially an auto-zoom given the points on the map.

I'm assuming that the React / Next version uses the same underlying Goo library, but I have the impression that they don't advertise that very well. Thus, when I mentioned the Map object and bounds to my co-dev, he didn't know what I meant. Again, no fault of his. One of my big objections is that React / Next re-define things in a way that hides the underlying code. (WordPest and Drupal and Magento and Angular do the same thing.)

My co-dev and I will have to go back around and figure out whether there is a solution to the Map problem. The client-side JS object must be there somewhere. I went looking for it briefly, but a quick search didn't find it. Thus, I figured out how to integrate Next (React / Nodejs) with OG client-side / web-based JS.

Some of this I posted in general form on 6/3. Now I'll try to be specific and add more recent code. I do not use the Location Apache directive anymore. Here are the Apache config mods:


# in another conf file in the real implementation
# live versus dev name
<IfFile			 /var/kwynn/i_am_Kwynn_local_dev_2024_01> 
    Define Gsn dev.example.com
# Define Gsn blah
</IfFile>
<IfFile			!'/var/kwynn/i_am_Kwynn_local_dev_2024_01'> 
    Define Gsn live.example.com
</IfFile>


# ServerName is in a 2nd other file in the real implementation
ServerName ${Gsn}

ProxyPass	 /nodejs http://${Gsn}:3000/
ProxyPassReverse /nodejs http://${Gsn}:3000/

ProxyPreserveHost On
ProxyPassMatch ^/(_next/webpack-hmr)$   ws://localhost:3000/$1
ProxyPassMatch ^/(_next/static/.*)$ http://localhost:3000/$1


At the HTML level, I invoke Node in an iframe:

<iframe src = '/nodejs' style='width: 21vw; height: 98vh; ' id='iframeNode'></iframe>

The following is the JavaScript both in the browser and in Node. It probably isn't perfect because I'm obfuscating some things. If you want it perfect, you can hire me.

Note that in the real code there are some promises and "on DOM load events" that keep things in the right order.

The Node side consists of checkboxes and "clear" and "select all." The checkboxes define different types of customers. This sends arrays of customers to the Map to display them. The "rebound" global is needed for the case where I clear the points / customers / markers on the Goo Map side and then call Node to set the checkboxes to unchecked. Given the way this is all set up, this would cause the Node side to call Maps yet again. It's an unnecessary call that would also mess up the data, so I have a variable to ignore it. The variable is defined in a different file from one of its usages. The order of definitions is protected by "on DOM load" and other promises.

/* receive from Node into the parent HTML window with the Google Map */ 
function rcvCustSelectionFromNodeJS(customers) { 
    if (GLIgnoreSelRebound) { 
	GLIgnoreSelRebound = false;
	return;
    }
    /* the customers are shown on the map */ 
}

/* send to node */
var GLIgnoreSelRebound;

function clearCustomersFromNodeSelection() {
    const e = document.getElementById('iframeNode');

    // the iframe's JS globals live on its contentWindow (same-origin here, via the proxy)
    if (e && e.contentWindow && e.contentWindow.NodeclearCustomers) {
        e.contentWindow.NodeclearCustomers();
        GLIgnoreSelRebound = true;
    }
}

/* the Node side */
// top of file - send selection to Goo Map in parent HTML window
'use client'

import React, {useState, useEffect} from 'react'
export default function setCustomerSelection({ filteredCustomers ...

    useEffect(() => {
        if (window.parent.rcvCustSelectionFromNodeJS)
            window.parent.rcvCustSelectionFromNodeJS(filteredCustomers);
    }, [filteredCustomers])
}

// another node file that clears the selection

'use client'

import React, {useState, useEffect} from 'react'
import Checkbox from './Checkbox'

export default function FilterCustomers({ ...

    const clearCustomers = () => {
        setCustomers([]); // original Node
    }
    
    // the window HTML / JS DOM object may not be defined yet, so check
    if (typeof window === 'object' && !window.NodeclearCustomers) {
        window.NodeclearCustomers = clearCustomers; // make available to parent
    }


continuing the rant

So, the good news is that I appear to have a system to get React and OG client-side JS talking to each other. Apparently it is React because the variables say so, but Next is involved, too. (I'm not as sure that Next needs to be there anymore.)

The bad news is that the Next server takes up enough RAM that I had to create a swap space. Apache, PHP, and MongoDB don't generally need swap space, but MariaDB and Next / React do. The next-server is taking 54GB of virtual RAM. In short, this is a case where I may have 1 million lines of code running where a few dozen would suffice. The "general store," as usual, does not seem worthwhile.

June 6

Ubuntu 24.04 upgrade notes, part 2 (posting after 04:32)

  • Upgrade MongoDB
  • remember to purge the previous MongoDB install, and follow the other directions in MongoDB's docs
  • set IPv6 to a lower TTL
  • Disable php8.2 and enable 8.3.
  • Re-install the nano time PHP extension
  • create a new SSL cert for testing; don't forget to switch it back

June 3, 2024

Looks like at least 3 entries today, and the day is still young from my POV. The first entries were around 5:30am. The 3rd is after 19:21.

Ubuntu 24.04 upgrade notes

Ubuntu 24.04 comes with PHP 8.3 in place of 8.2. Thus, disable one in Apache and enable the other. Then, I have to re-install my nano time PHP extension. That gets me farther at 19:24.

a slick bit of tech

The project I'm talking about will probably end in less than 2 months, so I'm nowhere near out of the woods. Similarly, I'm not being paid very much. I'm being paid a flat rate. Right now I consider my $ / hour to be decent, but I'm also going to try to keep working hard at this.

With that note, the roosters were out there crowing as of a few minutes ago (5:25am), so I'll join them.

First, a note to self that the proxy and proxy_http Apache HTTP server mods are needed. Apparently proxy_html is not, so I'll try to turn that off.

Also, you need a swap space (or perhaps a lot of RAM) to run even a "tiny" React / npm / Next / nodejs server. On a 1G RAM t3a.micro instance, I'm pretty sure npm was crashing without swap. I set the swap to 1GB.

As for crowing, I just accomplished a rather slick piece of tech. How does one get "traditional" client-side JS talking to a React project's almost entirely client-side JS? Put another way, how does one get traditional JS served from Apache to work with Node?

The elements were

  • Serve primarily from Apache as before. That is, serve the index.html from Apache. (It's a .html rather than .php because I have all PHP stuff turned off for the moment, trying to coordinate with the front-end dev.)
  • Use an iframe to serve from Node on port 3000
  • iframes can talk to the parent window and vice versa. I should have figured that out about 2 years ago, but I started trying something really silly involving talking back and forth with the server. It turns out it's painless.
  • The useEffects, or whatever the heck they are called in React / Next, are client-side JS ('use client' may be needed). In them, add a bit of code to intercept those changes and communicate them to the parent window. [slight change for clarity 06/10]
  • Use the Apache Location directive (I think. I'm not looking back at this yet.) and some Proxy and reverse proxy commands to serve up _next/blahblahblah files. They get "confused" between ports, as did I. Use similar directives to alias /callNode to port 3000. (It is NOT an "Alias" directive.)
  • As mentioned, enable the appropriate Apache mods.
  • And create a swap file

I may be missing steps, but that covers a lot of it. The hardest part was zooming in on using Location. I was trying Rewrite rules that were not working.

Maybe one day I'll record the details.

part 2 - fussing about the usual "general store" issue

This will probably remain a stub for now, but I hope to one day fuss about React just like I have Drupal and Angular and WordPress and such ilk. They may seem like very different tech, but there are commonalities.

I am going to try to learn React while I'm at it, but, so far, my suspicions are proving correct.

May 21, 2024

A few months ago a potential apprentice was looking at Craig's List ads. We only spent a few minutes at it, so I never quite explained my objection to his methodology.

I need help narrowing down to specific ads. He was creating yet more lists of ads, which is what Craig's List already is. He was sending me the results of keyword searches. The results were lists of ads. When he did send me specific ads, he was getting the idea.

There are either zero keywords or hundreds of keywords that will help me, where 0 and hundreds amount to the same problem: a keyword search won't work. In his case, I would have to go through his list, and then I would still have to go through the individual cities to find what he missed. He was adding to my workload, not reducing it.

We probably would have gotten that sorted, but his "self-discipline" comment did not go well with me.

May 7

SQL v noSQL

intro

I'm responding to someone specific, but I might as well post it.

When your friend says "textbook relational data," I disagree. In your case, it works just as well either way. For reasons I explain, my preference is for MongoDB. It's a strong preference in the sense that I have not rethought using MongoDB since 2017. However, if I were brought onto a project with well-written SQL and well-written data access objects (DAO) and whatnot, I might not make any suggestion to change it.

main section - written 5/6 but not posted that day

Your data tree came through intact. Everything you suggested looked fine to do in either SQL or noSQL. Your ideas for noSQL were fine. I have some long-term suggestions below, but they are indeed long-term. As we've discussed a few times over the years, the most important thing is to get something working however you can.

I started learning SQL in 1997, but I'm heavily weighted towards MongoDB since 2017. As I add to the law office project, all the additions are in MongoDB. I have not added a single field to MariaDB (MySQL) in years.

I'll explain why I started using MongoDB, but there is nothing wrong with RDBs (relational). My very vague notion from my job recruiter feed is that SQL is still in more job descriptions than noSQL. You can probably determine the ratios quickly enough.

There are probably cases where SQL is decidedly better, but the notion is vague to me. That is, I have no great examples in 8 years of part-time work. I give a quasi-theoretical case further below. Also, in RDBs' favor, the SQL SELECT syntax is very elegant. I am still not fluent in reporting-type queries in BSON (MongoDB's query syntax). I have some temptation to buy (or write) one of the various tools that convert SQL to BSON. By now, the free ones are likely getting better, but it's my job to learn BSON.

One of the reasons I moved to Mongo is Drupal's relational "field collection." The problem isn't inherent to a relational DB. My commentary on that has been something to the effect that a field collection is a perverse parody of 6th normal form. That's sarcasm. I haven't looked up what 6th means. I was taught 3rd normal form--"the key, the whole key, and nothing but the key."

In other words, for a year or two before I found MongoDB, my nose was rubbed in how NOT to do relational. Again, it's not an RDB's fault; that's just Drupal's nonsense.

The slightly bigger reason I started using Mongo is because in MongoDB you don't have to create a table or field in advance. Mongo does not care whether your keys or values (rows and columns) are consistent. In theory, this can lead to trouble. It's an argument for routing all inserts and updates for one collection through one data access object (DAO), likely a DAO that you write.

With that said, the only time I've been badly bitten was when I got lax and confused "6" (the string) with 6 (the integer). Thus, I (almost) always put my inserts and updates through a function called "vid()" or "validID()" or "vidord()"--"valid ID or die." That is, "6" or 6 goes into the function, and either a positive integer comes out or an exception is thrown.
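
A minimal sketch of the idea; the name vidord() comes from above, and the rest is assumed:

function vidord(id) {
    const n = Number(id);   // "6" (string) and 6 (integer) both become 6
    if (!Number.isInteger(n) || n <= 0) {
        throw new Error('invalid ID: ' + id); // die rather than save bad data
    }
    return n;
}

vidord(6);    // 6
vidord('6');  // 6 -- the string no longer sneaks in as a string
vidord('6a'); // throws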

Regarding speed (performance), once again, Drupal showed how to do it wrong. To this day there is a calculation that grinds on for about 1 minute at 100% CPU. There is a decent chance I'll have that down to 100ms in the near future.

The problem with the field collection was that it assumed you'd be interacting with a form and nothing else. Woe be unto you if you need to process the data in another way, such as the simple task of finding which hours have not yet been billed. I'm joining way too many tables.

MongoDB has always served me well in terms of speed. With that said, there have only been a handful of times in MongoDB that my data has gotten into the millions of rows / documents.

Off hand, I can only think of one time I was really polishing speed in terms of millions of rows. It's somewhere in my public GitHub repos, but I'm not going digging for it now. It's when I was processing millions of lines of web server access logs, although the good stuff might be in another repo.

In that case, the problems are identical whether it's SQL or noSQL. For example, insertMany() is much faster in either system. That is, it's "insertMany()" in MongoDB; in SQL it's one INSERT statement with multiple VALUES tuples. The point is that there is a direct equivalent.
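
A hedged sketch of the equivalence, with invented collection and field names; the SQL version rides along in a comment:

// one round trip instead of thousands
db.getCollection('access_log').insertMany([
    { ip: '10.0.0.1', path: '/' },
    { ip: '10.0.0.2', path: '/blog' },
]);
// SQL: INSERT INTO access_log (ip, path) VALUES ('10.0.0.1', '/'), ('10.0.0.2', '/blog');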

There is another situation in MongoDB in which you can get into trouble more easily than in an RDB. I'm not sure I can remember the exact scenarios, but I can get close. Say you have a court date associated with a case ("State versus Bob"). And then say you're concerned with whether a case is public defense or private. In SQL, the join is simple, and you quickly isolate dates for public defense cases.

There are probably solutions in Mongo, but I suspect they are visually ugly. Again, a SQL SELECT is quite elegant. I haven't had a situation where I really needed to whittle that down. With a bit of thought, you can isolate what you want quickly enough in MongoDB.
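
To make the "visually ugly" point concrete, here is a hedged sketch of the Mongo side using $lookup; the collection and field names are invented:

db.getCollection('court_dates').aggregate([
    { $lookup: {
        from: 'cases',
        localField: 'caseId',
        foreignField: '_id',
        as: 'kase',
    }},
    { $match: { 'kase.defenseType': 'public' } },
]);
// SQL: SELECT d.* FROM court_dates d JOIN cases c ON c.id = d.case_id
//      WHERE c.defense_type = 'public';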

Similarly, there are reporting scenarios, as opposed to transactional scenarios, where SQL is likely easier and probably faster.

I have one screen in the lawyer system backed by MongoDB that stutters for 120ms on my local machine and 330ms on the live system. That's too slow for my taste. There have been other fish to fry, though, so I haven't fixed it. I don't think this is noSQL's "fault," but I don't know. In any event, I'll try to remember to report on the solution, although it is probably months away, only because that project is such a small fraction of full time.

answering your questions more directly

Hopefully the above was helpful. Now I'll try to answer your questions directly.

In your simple scenario, there is no difference between SQL and noSQL in that you're searching on the "Posts" table / collection. Obviously the exact syntax is a bit different, but I don't see any conceptual difference. No, I don't see a performance benefit either way. Remember that you can create indexes in both systems. You'd want an index on post.uid.
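
For completeness, that index in both systems; "posts" and "uid" are your names, so treat this as a sketch:

db.getCollection('posts').createIndex({ uid: 1 });
// SQL: CREATE INDEX idx_posts_uid ON posts (uid);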

regarding IDs

Sorry to diverge from your questions again, but I just thought of a debate in my head that's been intermittently going on for about 24 years, and that led to another thought.

I've been debating whether IDs should be positive integers in cases where there is no compelling reason to be anything else. I've come back around to yes. Integers almost certainly compare faster in the CPU because integer comparison is pretty much the most basic operation of a CPU. You might benchmark a string of 9 bytes or more against integers, with an index on each, and compare the difference. You may need to auto-generate 100,000s of rows to reach a conclusion.
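
If you want to run that experiment, the skeleton is something like this. The counts, padding, and collection names are arbitrary; treat the whole thing as a sketch:

    require '/opt/composer/vendor/autoload.php'; // mongodb/mongodb assumed

    $db  = (new MongoDB\Client)->idtest;
    $int = $db->by_int;
    $str = $db->by_str;
    $int->createIndex(['id' => 1]);
    $str->createIndex(['id' => 1]);

    for ($batch = 0; $batch < 50; $batch++) { // 500,000 rows in each collection
        $a = $b = [];
        for ($i = 0; $i < 10000; $i++) {
            $n   = $batch * 10000 + $i;
            $a[] = ['id' => $n];
            $b[] = ['id' => str_pad((string)$n, 12, '0', STR_PAD_LEFT)]; // 12-byte string
        }
        $int->insertMany($a);
        $str->insertMany($b);
    }

    $t = microtime(true);
    for ($i = 0; $i < 10000; $i++) $int->findOne(['id' => rand(0, 499999)]);
    echo 'int lookups:    ', microtime(true) - $t, "s\n";

    $t = microtime(true);
    for ($i = 0; $i < 10000; $i++)
        $str->findOne(['id' => str_pad((string)rand(0, 499999), 12, '0', STR_PAD_LEFT)]);
    echo 'string lookups: ', microtime(true) - $t, "s\n";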

With that said, I sometimes have both a human-readable _id and then a positive integer ID with a unique index on it. Sorting the index backwards (-1) may be useful; in the case (situation) of the lawyer, he's usually working with the most recent or highest numbered (legal) cases. For the human-readable ID, I either have a human-readable date plus some data guaranteed to remain unique, or I may have something like "Bob_versus_State_caseID_12345."

I really like my human readable IDs. For my default date code, see my GitHub and the get_oids() function.

Note that if you use "Bob_versus_State," you have to be careful to not try to change the _id in the case of an update / upsert. That's one reason to use an integer in addition to a human _id. That is, if you do it the way I'm suggesting, you make sure _id isn't in the data payload (upsert) for an existing document / row. This can get awkward in the case where you don't know if it's a new document or not. I can elaborate if you care.
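
The cleanest way I know to handle the don't-know-if-it's-new case is $setOnInsert, which only applies when the upsert actually inserts. A sketch, with invented field names:

    $cases = (new MongoDB\Client)->mydb->cases; // autoload as in the earlier sketches

    // an update never touches the immutable _id; only a fresh insert sets it
    $cases->updateOne(
        ['caseNum' => 12345], // the integer ID with the unique index
        [
            '$set'         => ['nextCourtDate' => 1712345678],
            '$setOnInsert' => ['_id' => 'Bob_versus_State_caseID_12345'],
        ],
        ['upsert' => true]
    );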

back to your questions

As I hope I've said, I don't have a super-strong opinion as to when to use each. So far, I haven't seen any reason to go back to relational. If you're writing a zillion reports and you itch to use SQL, there may be an argument to create a relational data warehouse from the transactional MongoDB data, and then report off of the warehouse.

That is likely the best I can do for round one. The long-term thoughts are below.

images - long-term ideas

I tend to base64 encode binary data (such as images) if I'm going to put it in a database. There is something to be said for being able to read a raw .bson file. That is, if you store raw binary, the output of "more" or "cat" or even mongosh looks very weird. This also raises the question of what an image looks like in MongoDB Compass or Robo3T (deprecated) or an equivalent of MySQL Workbench--that is, a visual / GUI / pretty query tool.

I can't imagine the encoding and decoding makes much speed difference in 2024's hardware. Base64 is not a strong opinion, just something to think about.
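
In PHP it's just base64_encode() / base64_decode() wrapped around the write and the read. The names and paths below are invented:

    $images = (new MongoDB\Client)->mydb->images; // autoload as in the earlier sketches

    // store the image as base64 text rather than raw binary
    $images->insertOne([
        '_id'  => 'logo_2024',
        'mime' => 'image/png',
        'data' => base64_encode(file_get_contents('/tmp/logo.png')),
    ]);

    // and decode on the way out
    $doc = $images->findOne(['_id' => 'logo_2024']);
    file_put_contents('/tmp/logo_out.png', base64_decode($doc['data']));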

In the long-term, you might not want images in the database at all. There is a question whether any database is suited for large binary data. I'm sure you can find lots of opinions out there. In the short term, as always, do whatever works the easiest, which is putting images in the DB.

I'll give you an example of a situation that makes my teeth hurt. It's not perfectly equivalent, but it's close. In the lawyer project, my client writes snail-mail letters to his clients. I save the LibreOffice Writer file of each letter. I am almost certain it's binary because the format is a ZIP archive of several XML files.

I often back up and restore his database from the live system to my local machine. Every other table (collection) is eye-blink fast, but both saving and reading the letters takes quite a few seconds. There can't be that many letters (5,000 at most), and they are not that big (20k, 100k at most). Perhaps I'll look it up. The point is that it's the one item that bogs my backups down. Also, in this case, the letters get less important over time. That is, looking at a letter from 1 year ago is unlikely, and 5 years ago even less likely. One day I plan to do something different.

April 28

more Craig's List madness

On one hand, I am not one to complain about length of writing. I damn well am going to complain when 15 years and 3,000 - 4,000 responses have resulted in not much. This guy's ad is 1,635 words and 8,748 bytes. This was my intended response:

I appreciate a thorough explanation of what you want. Fifteen years ago I would have been thrilled to see such an ad. I would have diligently read it. I may still diligently read it, but this is something like my 4,000th response in 15 years. I have learned the hard way that no matter what I do, response rates are low. I should probably give up on freelancing, but the alternatives have their own problems. The first reason I freelance is that this will be my 5th email of the "day." I am quite a night owl.

I've been doing web development part time for over 20 years. It's been part time because I can't quite solve the aforementioned problems. I may be quite content to get $20 / hour, and I'll probably be faster than many applicants.

I got to roughly point 13 and then skimmed a bit. It seems you want to do certain steps one at a time. That would be great if I'm paid after each step. I like the idea of breaking projects into small parts and payments.

That's the best I can do for first contact. It's likely about the best you'll get. I suppose I'll find out eventually, or not.

Kwynn

I was ready to send that. I should have suspected the following; it's happened before. I went to look for an email address, and there was no email address--just a phone number. His ad also has a DARPA Lifelog ("Facebook") link. The guy was a Marine. Perhaps that explains a few things. I still refuse to read the whole thing until I know this is for real. I'm wondering if someone on Craig's List is playing a prank on him.

I caught a few more pieces of it, and now I think he's saying he won't pay a damn thing until the whole project is done. Ok, maybe, but he'd better put the money in escrow.

I decided it was too late to ping him, and it may not be appropriate on Sunday. That's never clear on Craig's List. I'm not sure if I'll ever contact him or read his whole ad. Perhaps I'll read it when I'm in the sun with my bare feet on the ground.

April 23

content distribution

As has become the case often lately, this gets just beyond the stub stage. This entry started as a job application on the 20th, but the job already went to someone local to them. I'm relatively sure that reason is the truth. I'd decided before I got that word that it wasn't worth that much time. The idea is worth some time, but I can only do so much at once.

I wrote about this years ago on another page within this site (I'll link to it eventually [or not]). Apparently I didn't describe it well enough, though, so I'll try again and eventually update that page.

Today's immediate context is art "discovery," which is almost identical to art distribution. In many contexts, preventing copying is a lost cause, so it seems an artist should want both maximum distribution and an NFT contract that allows a purchase after possession. (The term "NFT" was not in the ad I was reading, but it was implicit.)

In this entry, I'm both responding to an ad and sketching out my larger idea. My original context, years ago, was distributing political information: videos, memes, etc. The idea (that I'll sketch out below) works for many contexts, though. More specifically, I want to solve the censorship problem.

My idea makes the assumption that the creator wants maximum distribution. Part of my idea goes to the concept of a website versus a protocol / API. If I want to provide max distribution, I should provide a protocol / API first; any websites are secondary. A website may be built upon the protocol / API, but it's secondary. Similarly, I should make it easy for others to put content on their own sites using their own storage, for redundancy and faster downloads. If the protocol / API gets sophisticated enough, downloads get faster both because clients can route based on distance and because my proposed system can use file-sharing tech to pull data from many locations at once.
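
To make "protocol / API first" slightly more concrete: the minimal version is a manifest endpoint that, given a content ID, returns a verification hash and the mirrors carrying that content, so any client can pull from several sources and check the result. Everything below is hypothetical--a shape, not a spec:

    <?php // manifest.php?id=meme_4211 -- all names invented for illustration
    header('Content-Type: application/json');
    $id = $_GET['id'] ?? '';
    // in real life the mirror list comes from a database of participating sites
    echo json_encode([
        'id'      => $id,
        'sha256'  => 'HASH_OF_THE_CONTENT', // so clients can verify any mirror
        'mirrors' => [
            'https://mirror-one.example/content/' . $id,
            'https://mirror-two.example/content/' . $id,
        ],
    ]);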

April 16

one of the funnier bits of doc I've seen in a while

I have vague memories of seeing some good humor in documentation, but I don't remember them off hand. I literally LOL'ed when I saw this:

Raising an issue:

We like issues! If you are having trouble with iocage please open a GitHub issue and we will run around with our hair on fire look into it.

-- iocage README.md

April 14

Looks like at least 2 entries today. The Oracle one is the earlier one and further below.

more fussing over Craig's List (posted after 23:23)

Yes, yes. There is an obvious question about insanity and Craig's List: doing the same thing over and over and expecting different results. Reddit shows signs of being a much better option, but I haven't dug into the question of how to play the MMORPG and get enough "karma" points to reply. Besides, I assume that Reddit is the sort of place where I'd get kicked off in 5 minutes if I were honest. That is, if I tried to find a sub-Reddit of interest, I'd likely be kicked off the whole platform. It's one of several instances where it would be useful to work with someone who isn't quite as full of foam in the mouth as I am. It occurs to me that I need interfaces to the Satanic world, and I try very hard only to condemn people in very specific instances. Obviously Oracle employees grate on me, but I'd still work with them on projects unrelated to Oracle.

Several weeks ago one of my former apprentices found an ad with a real email address (as opposed to Craig's List relay). Also, both he and I separately found the guy's name and website. That's a promising ad, and I can and probably will start hounding him if I don't get a reply. After all these years, I still can't believe how incompetent I've been at sales. I can't believe that I can remain this bad for this long. I get so close on CL that something must work some day.

On one hand, my tolerance of sales seemed a bit better this evening. Then it started getting worse on only the second reply, to an ad with the usual CL relay address. They want a phone number, but they don't post one. Why would I want to give one for exactly the same reason they don't? Also, they were trying to post a video to Big Evil Goo Tube, so they want a public video. They were having formatting problems. It would have been lovely if they posted the raw file to Goo Drive or some such and let us all look at it. I guess that wouldn't occur to everyone, but there are zillions of examples of this. That is, situations where the information involved can be public, so even if you don't know how to post it, at least make that suggestion in your ad. If what you want to post will be public, why the hell is my phone number relevant until I deem it so? My phone number may be relevant to getting paid, for example.

There is the possible question of someone running off with their video and posting it as their own, but that seems improbable. Also, anyone can download any Goo Tube video and post it as their own. Duh. My point is that there isn't much of a question of trust.

Even if there is a question of trust, I've spent a zillion hours on this website. Some people may not like my content and not want to work with me, and that's fine. I'd better show that I'm a real person, though. Connecting my reply with this site is a reasonable request, of course, and there are lots of ways to do that. Although my writing style is so rare (for good or ill) that my site should match any email of mine. Also, I have a worldwide unique name. The thought of Goo'ing myself is horrifying, so I'm not sure, but I doubt that there is much about me on the web. It seems that if I were stealing people's videos and such, they would know that.

In 2024, it seems that many people who reply to a CL ad should have some sort of web presence. Thus, why is anyone's phone number relevant at the start? What is there to talk about in this case? You want to post a public video, so show us the videos. Again, it may not occur to them to post the raw video, but I've seen this sort of thing before in cases where it should occur to them at least part of the time. The ratio of such is very low, though.

former Oracle DBA Barry Young of New Zealand - a hero in the hands of "those who have hanged heroes" (posted 9:34pm)

I just had my nose rubbed in the cooings of a happy Oracle employee. On June 14, 2023, I had a few words about Oracle. A few months later, in November (or perhaps a bit before), an Oracle DBA named Barry Young in New Zealand finally released the data Steve Kirsch had been looking for: "It's finally here: record-level data showing vaccine timing and death date. There is no confusion any longer: the vaccines are unsafe and have killed, on average, around 1 person per 1,000 doses" (Kirsch on November 30, 2023).

(I haven't looked, but I have no reason to believe that Young worked for Oracle.) Last I checked, Young is facing 7 years in prison. The only person associated with Oracle products who is doing something useful is, of course, being made out as the bad guy.

I had some trouble finding Young's name from Big Evil Goo, of course. He came right up on Yandex. Go Russia! Yandex is Russian, but its English search actually does search rather than Satan worship.

At the very beginning of Braveheart, Robert the Bruce is narrating. He says something to the effect of "English historians will call me a liar, but history is written by those who have hanged heroes."

April 2

First posted 01:50.

I'm not feeling so enthused about updates, but I suppose I do have a very small audience, and there are updates.

I finally upgraded kwynn.com to Ubuntu 23.10. One lesson is that one should beware of setting the TTL of AAAA records too high, at least on AWS EC2. I had them set at 24 hours. I'm trying to be nice to both my registrar and the DNSystem as a whole. AWS EC2 handles moving IPv4 addresses between instances nicely--that's Elastic IPs. You can point an Elastic IP at any instance you want. You can reassign v6 addresses, too, but, as far as I know, you have to explicitly release them from the old server (instance) first. With a lower TTL, it would have been easier to just update the AAAA record, but instead I took my site down for longer than I wanted in order to reassign the v6 address.

On a related point, be careful of having more than one subnet (unless you have a reason, of course). I had a second one for no good reason, and I wasn't giving thought to which one to select. That's why I couldn't switch IPv6 easily--I had the wrong subnet. Thus, I deleted the second one.

I think it was 23.10 that upgraded to PHP 8.2. (I upgraded my local computer months before kwynn.com.) In any event, I got hammered by "Deprecated function: Creation of dynamic property className::$varName is deprecated..." I wasn't prepared for that sort of workload when I upgraded Ubuntu. I would assume there is still stuff broken on my site. I went over the stuff I use; that's the best I can / will do for now.
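
For the record, the fix for each complaint is either declaring the property or punting with an attribute:

    // PHP 8.2+: this is what triggers the deprecation notice
    class Old {}
    $o = new Old;
    $o->count = 1; // "Creation of dynamic property Old::$count is deprecated"

    // fix 1: declare the property
    class Fixed { public int $count = 0; }

    // fix 2, the punt: explicitly allow dynamic properties
    #[AllowDynamicProperties]
    class Punt {}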

Years ago I heard that AWS us-east-1's original AZs were becoming old and decrepit. I've been on -1a for a while, and, with one possible exception, it works fine. It seems too often that the network connection gets hung up for minutes upon (re)boot. I don't have enough data to correlate that to an AZ, though. It's just a thought.

So, when I create a new instance, a new AZ is one consideration. Then there is the warning flag, "EC2 recommends setting IMDSv2 to required." I wondered what the heck that was. It's the http://169.254.169.254/ service within an instance--Instance MetaData Service. It lets programs figure out which AZ they are in, and what type of instance it is, and such. Around 24 hours ago I upgraded my metric tracking code to IMDSv2. I'd have to move instances, though, to turn v2-only on. At least, I'm pretty sure I have to move instances. Actually, no; one can do it on the fly, but my first glance shows it's too complicated to bother with.

AWS' discussion of v2 is interesting. They set the TTL of the result to 1 hop, so that the data can't leave the instance. Also, they observed that most proxies and firewalls and such don't forward HTTP PUT, so they use that.
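
The v2 dance from inside an instance looks like this. The endpoints and headers are AWS' documented ones; this is a bare-curl sketch rather than my actual metric code:

    // step 1: PUT to get a session token
    $ch = curl_init('http://169.254.169.254/latest/api/token');
    curl_setopt_array($ch, [
        CURLOPT_CUSTOMREQUEST  => 'PUT',
        CURLOPT_HTTPHEADER     => ['X-aws-ec2-metadata-token-ttl-seconds: 21600'],
        CURLOPT_RETURNTRANSFER => true,
    ]);
    $token = curl_exec($ch);
    curl_close($ch);

    // step 2: GET metadata with the token
    $ch = curl_init('http://169.254.169.254/latest/meta-data/placement/availability-zone');
    curl_setopt_array($ch, [
        CURLOPT_HTTPHEADER     => ['X-aws-ec2-metadata-token: ' . $token],
        CURLOPT_RETURNTRANSFER => true,
    ]);
    echo curl_exec($ch), "\n"; // e.g. us-east-1a
    curl_close($ch);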

In somewhat unrelated news, I got Discourse running for a client (a few days ago) on EC2. That went smoothly.

In unrelated news, I notice that the latest HTML validator complains "Trailing slash on void elements has no effect and interacts badly with unquoted attribute values." I suppose I should deal with that. I may do so now, in fact.

March 13

False alarm / false update.

March 11 - thoughts on cryptocurrencies

note

This is quick and dirty, but it's all I have time for. Hopefully it's useful.

I wish I didn't have time because I was making lots of money, even fake "money" as defined below. It's more like I have just enough work to not quite feed and house me. I want to finish this work early in the month so I can seek new projects in a week or so.

an attempt at a conclusion

I wrote the conclusion last, but I'll post it first.

As a system of currency, it's hard to get any worse than the banks. That's always been a Satanic operation. Crypto-powered AlphaBay starts getting the world back to how it was in 1910, before (((they))) started making more money and causing much more damage by making "drugs" "illegal."

So far, almost all if not all of the cryptos have been less stable than the FRN, which is rather pathetic. The FRN will always have a downward trend, though, and the cryptos have a potential upward trend indefinitely.

Holding crypto is speculative but probably a good idea at a low percentage of investments. Mining or staking crypto may be fine if you can start with small amounts, start getting returns, and don't have a large egg in the basket.

If we made a stable cryptocurrency, we could change the world.

This begins the original post:

I've been asked about crypto at least twice in the last few weeks, so I'll try to answer usefully.

First, as an abstract concept, independent of specific tokens / coins, crypto has the potential to be a true revolution along the lines of the industrial revolution. Crypto has proven that a large number of people will accept a currency(ish) outside of governments, banks, or other "official" institutions. I don't quite consider crypto to be true currencies, because one definition of a proper currency is a unit that maintains stable purchasing power for years, decades, or centuries. The cryptos spectacularly fail at stability.

There were supposed to be "stable" tokens, but my thumbnail understanding is that a handful of those totally failed recently--they became so unstable that they went to permanent zero. Also, at least one was based on the Federal Reserve Note (FRN - fake "dollars"), which is quite a joke.

To back up and emphasize that point: the green bills and stuff in bank accounts are not legal dollars. They are falsely denominated in dollars. The US Mint still issues a true one dollar, one silver (Troy) ounce, 2024-stamped coin. I link to the proof--reflective, polished, for collectors--version. They also mint many more non-proof coins that are theoretically meant for general circulation. You generally can't buy those directly from the Mint because they only sell in bulk. Last I checked (many years ago), you could buy those coins for 1 - 5 Federal Reserve Notes (FRN) above the spot price of silver. That is, a true dollar will cost you perhaps 28 FRN with a spot price right now of 24.73 FRN / Troy ounce.

The Mint also issues 1 (Troy) ounce gold coins that are stamped 50 dollars. Hopefully they didn't discontinue the American Eagles in 2024. I can find a 2023 gold coin, though. Interestingly, that one is uncirculated rather than proof. I didn't know they sold those directly, although I haven't looked in years or even decades. In any event, I'm sure I could get a much better price, but the spot of gold is 2,190 FRN right now. I would likely have to pay 2300 FRN for a 50 dollar coin, so that's a 46:1 ratio rather than > 25:1 for silver.

So, how do the banks get away with denominating their account money in dollars? I damn sure can't get gold or silver at the legal ratio. That's a rhetorical question, of course. The banks are at the core of the Satanists / J--ish "elite" / Illuminati system.

Worse yet, 99.99999% of all fake "dollars" are literally created by a bank when a debt is incurred. Every (99.999999%) unit of the so-called national currency is lent at interest, which means the whole "national" currency is borrowed, which means that the currency can never be repaid because more debt exists than units to pay it. The 2018 Swiss sovereign-money initiative (Vollgeld) tried to address this issue, shockingly. Meanwhile, Silent Weapons for Quiet Wars blithely admits this is "slavery and genocide" (page 4), "counterfeiting" (14), and "... presented as ... 'currency,' [that] has the appearance of capital, but is in effect negative capital. Hence, it has the appearance of service, but is in fact, indebtedness or debt" (10).

So, on one hand, cryptos are a true representation of value as opposed to Rothschild's debt-slave tokens. Cryptos are created by intense computation (mining, like Bitcoin -- proof of work) or lower-power computing backed by proof of stake (like Cardano Ada, at least in the summer of 2021). Similarly, it's much harder to turn someone's account off. My understanding is that during the Canadian Truckers' Convoy (early 2022), the Satanic occupying power ("government") targeted Coinbase and other large exchanges. That only works if one kept one's wallet with an exchange. A crypto holder is free to keep their own security tokens / wallet.

I should also address the notion that crypto is used for evil purposes. What a joke. The banks have been deeply involved in the "illegal" drug trade since banks existed. The same people who own banks (Rothschilds, etc.) run the drug trade, sex trafficking trade, etc. They are the ones who decide what's illegal. Cocaine and heroin were over the counter products in America in 1900. Duh. JP Morgan has been successfully sued by Jeffrey Epstein victims in the last few months, recently for 290M FRN.

Bayer marketed diacetylmorphine as an over-the-counter drug under the trademark name Heroin.... From 1898 through to 1910, diamorphine was marketed under the trademark name Heroin as a non-addictive morphine substitute and cough suppressant.... In the US, the Harrison Narcotics Tax Act was passed in 1914 to control the sale and distribution of diacetylmorphine and other opioids...
-- Heroin, WikiP, unchanged as of Mon, 11 Mar 2024 23:32:35 GMT, captured around 21:15 EDT.
By 1885 the U.S. manufacturer Parke-Davis sold coca-leaf cigarettes and cheroots, a cocaine inhalant, a Coca Cordial, cocaine crystals, and cocaine solution for intravenous injection.
-- similar to above, apparently unchanged since Mon, 11 Mar 2024 13:46:30 GMT, captured ~3 minutes after the above

Ronald Bernard talked about banks. At a quick glance, he's talking about them at 17 minutes. At one point in that interview, he talks about how his mission was to launder tractor-trailer loads of FRN green bills in the basement of a large European bank. That was for Iraqi oil during sanctions, but the same applies to actual crimes, as opposed to the tyranny of the decade that the US Inc. makes up.

On a similar note, I spent perhaps 10 minutes browsing AlphaBay, a dark web site. Every "illegal" drug under the sun was available. I just laughed and laughed. Why I thought it was funny is another topic.

Wall Street Journal: "Instagram Connects Vast Pedophile Network" by Jeff Horwitz and Katherine Blunt. June 7, 2023 7:05 am ET. Another example of large companies being far worse than anything crypto is accused of. I haven't read that specific article. I assume it's a limited hangout, and the truth is far worse.

an attempt to get to the point--investments, mining, the future, etc.

The popularity of crypto has proven that people will accept non-"official" currencies. As far as I know, though, no one has created a stable crypto, or even proposed one that makes sense. Basing "stability" on the FRN is laughable.

A stable crypto or a family of them could be a true revolution. We would be able to issue currency as the users' economy grew and create true wealth as opposed to a debt that someone has to pay. In For Us, the Living: A Comedy of Customs, Robert Heinlein sketched a currency system that would make it unnecessary to work for necessities. I am reasonably sure that his system is feasible. It's hard to comprehend how much damage the various Satanic tentacles do. That is, peace on earth is fairly simple if you counter a handful of basic Satanic schemes.

I would love to work on that project, and I have the tech capability to do it. I would need the right barter or a small number of FRNs per month.

As for the existing cryptos, I would treat them as speculative investments that you can afford to lose. The supply of Bitcoin grows very slowly, and it's programmed to stop growing entirely at the 21 million coin cap. Meanwhile, demand has obviously been all over the place for years. I tend to think it's a good long-term investment because its supply growth will be very slow. It's speculative because it's already been very volatile, and it's still a new system with some potential unknowns.

Ether should also be solid because Solidity is part of it. :) People were starting to move towards Solana, I think it was called, because Ethereum transactions (gas fees) can get expensive relative to what you're trying to accomplish.

As for mining or proof of stake, the numbers on Cardano Ada were fairly easy to run, although I didn't run them when I was working on that project. Ada was (and probably still is) proof of stake. You earn Ada by running a node and "staking" Ada to that node. A node is the equivalent of a Bitcoin mining rig, although a node didn't need specialized or powerful hardware. A node is a program running on a server that takes turns with other nodes creating blocks and thus new Ada. Staking is essentially a vote of confidence in your node. You can "stake" your own Ada against your node or get others to do so. Staking is not custodial and doesn't keep anyone from spending their Ada. When a stakeholder sells Ada, the stake follows the new owner's delegation (if any). As best I understood the algorithm, if you had 1% of all Ada staked to you, you would create a block, and thus create Ada, every 100 seconds.
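
The arithmetic behind that last claim, with the caveat that the network numbers here are from memory and purely illustrative:

    // expected seconds between my blocks = network block interval / my stake fraction
    $networkBlockIntervalSec = 1.0;  // assumes the chain makes roughly 1 block / second
    $myStakeFraction         = 0.01; // 1% of all staked Ada
    echo $networkBlockIntervalSec / $myStakeFraction, " seconds\n"; // 100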

Running the system was, at the time, about $70 a month on AWS. For that purpose, AWS was probably a bit expensive. I could probably have come up with cheaper alternatives. A few people published their earnings, and some people were making some money (Ada).

I can't imagine mining Bitcoin is feasible these days. My understanding of that algorithm is that you are buying lottery tickets with GPU power. The total number of tickets must be astronomical these days. There is a drawing every 10 minutes, but the chance of winning must be vanishingly small. I'm sure someone has run those numbers.
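
Actually, running those numbers is one division, if you can find the network hash rate. Every figure below is a placeholder, not current data:

    // expected time to win = (network hash rate / your hash rate) * 600 seconds
    $networkHashesPerSec = 6e20; // placeholder; look up the real network figure
    $myHashesPerSec      = 1e14; // placeholder rig
    $expectedSec = $networkHashesPerSec / $myHashesPerSec * 600; // one drawing per ~600s
    printf("expected win: every %.0f years\n", $expectedSec / 3.156e7);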

Also, I heard a story recently that is a cautionary tale. I have high confidence that the story is true, at least in the end result. Someone was making good money with one of the oldest, most popular cryptos (I think it was one of the proof-of-work systems). They were reinvesting, though, and not taking any of their profits. Also, they apparently didn't have direct access to their wallets. At some point, the people running the data center said that all the money had been lost and gave some excuse as to how. The excuse was likely a lie, and the data center operators likely stole the money.

March 8 (started 4th)

This is more on moving computers.

  • install the HTML validator - first sudo apt install default-jdk then, after cloning, python ./checker.py all
  • The way I have it set up, the validator needs the proxy and proxy_http apache mods
  • Here is how I set up the validator in Apache config to run at /htval
  • the validator involves systemctl if you want it to start at reboot. That URL might change, so here is a specific version of nu / validator systemctl.
  • replace "nobody" user with DynamicUser=yes # done in repo above
  • I have been and will be adding to my sysadmin examples repo
  • I will have to activate the Apache rewrite engine: sudo a2enmod rewrite ; sudo systemctl restart apache2 -- note restart not reload
  • sudo systemctl status apache2 | fold -- the fold command wraps lines readably
  • sudo a2enmod ssl and restart
  • put locally run, SSL-enabled domain name in /etc/hosts : add the line : 127.0.0.1 example.com
  • set permissions of Apache DOCUMENT_ROOT aptly
  • it would appear I have symbolic links in kwynn dot com /t/23/02. That may not be the best way to do it, but I will probably stick with it for now. The specific links are below. Need to change underlying permissions, too.
    1. homecms points to my CMS project / home (explicit /home)
    2. htva points to cms/htva
    3. webhook_git points to my GitHub webhook project -- I rather doubt this still works, but that's another issue
  • That in turn leads to issues with isKwGoo and the positive email [Big Evil Goo Mail] check. See "notes" below.
  • looks like kwynn dot com /index.php and the cms index.php should be a hard link
  • need to install GMail / Goo composer. I keep these in /opt/composer as is shown in kwutils (I refer to the whole repo as kwutils because of the one file with that name):
    composer require google/apiclient
    composer require google/apiclient-services
    composer require google/auth
  • positive email check read-only secret file - set permission - That's the Goo Cloud Console API key stuff.
  • isKwGoo email hash file
  • looks like I have yet more symlinks in my website to resolve, or just use the actual repo.

notes

My "positive GMail check" system was originally meant to be darn sure whether I have unread emails or not. I am an embarrassing number of Android versions behind, but I doubt that situation has improved. I have found that one can swipe GMail's inbox all one wants, but you can't be sure if you're getting new emails or not. My checker solves that problem quite nicely, although it's pull and not push. (I suppose I could fairly easily adapt it to push.)

I generalized that code to do Goo OAUTH(2) / "Login with Big Evil Goo[gle]." I've been using that with my lawyer project for months now. "isKwGoo()" is a function I use for my own site to indicate whether the admin (me) is logged in. The function is in my kwutils / general PHP code, though, not the positive email repo. In any event, I created a hash of the non-public email address that should be considered the admin. isKwGoo() then checks whether the logged-in session has successfully verified that email address. If so, I'm admin.
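
The core of that check is a hash comparison. A simplified sketch--the real function lives in kwutils, and the file path below is invented:

    // is the verified Goo login email the admin's? compare against a stored
    // hash so the address itself never sits in a repo
    function isAdminEmail(string $verifiedEmail): bool {
        $storedHash = trim(file_get_contents('/var/kwynn/admin_email.sha256')); // hypothetical path
        return hash('sha256', strtolower(trim($verifiedEmail))) === $storedHash;
    }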

February 19 (not posted until probably the 29th)

Some thoughts on moving computers:

  • GitHub keys
  • get Apache config in repo
  • create necessary (perhaps obfuscated) paths / sym links - such as /var/kwynn/...
  • install chrony - After being asleep for something like 9 days, the new computer is 20 SECONDS fast. That won't do.
  • put my chrony.conf in public repo and swap it for the default
  • move selected passwords - GitHub, client stuff, etc.

Feb 5

null update

The page is marked as changed because I started an update and didn't get anywhere with it.

January 15

yet another attempt to promote my services

background

This will be for various Gab.com groups.

ad (v1 ca 17:20, v2 17:22+)

I offer a wide range of software / web / mobile services, perhaps for much less than you'd think. Lately I revived 3 websites; I did that for $100 - $220 each. I built a law office web application that automates tasks such that a human assistant has been unnecessary for years. Getting more exotic, I wrote a USB device driver and an NFT contract, and I created a Cardano Ada crypto stake pool. I started learning the SQL database language in 1997, and I still use it.

Expenses are by the day (week / month), not the hour. I'm more interested in total "dollars" (FRN / debt notes) than a rate. I'm hoping to start a project that will pay $1,000 / month for 6 months. That will probably get a lot of work out of me because, along with my one steady-but-small project, that will keep me fed and housed. Also, I will barter for food and housing, and I will move to quite a few places to do so.

To shift gears a bit, I would do just about any work during an overnight shift. I am quite the night owl. Lately I have been applying to overnight manufacturing jobs rather than daytime tech jobs.

Also, I'd rather do just about anything with / for purebloods during the day than work for the vaxxed zombie horde at night. Two months ago, there was talk of building a deck with a pureblood. Weeks later, that still sounds preferable to many other options.

more

More specifically, that's the dawn-supremacist, illiterate, Zoom-ing, shod, masked, vaxxed zombie horde.

January 2

I just posted the previous entry that I wrote on December 26 but didn't post until now. I had to install a couple of plugins and restore the theme, and then I did get paid. That client should have more work for me soon, too, but I plan to spend more time on non-technical jobs for at least a handful of months. Probably more to come on that in my personal blog soon.

I'm writing and hopefully posting now because I finally took the moment needed to track down the "general store" joke. "I went to a general store but they wouldn't let me buy anything specific." That's American comedian Steven Wright, born December 6, 1955. So the joke can't be as old as I thought it was. It probably still is decades old, but I thought it was from the 1920s - 50s. I misquote it at least once below. I had the idea perfectly right, but not the exact wording.

2023

December 26

starting 19:44

It's a Boxing Day / post-Christmas / pre-Russian-Orthodox Christmas miracle! I looked at the site sideways, and it fixed itself! Seriously, though, I logged into WordPress' admin and clicked on a few admin pages, and that activated some sort of emergency fix-it script. The site was fixed a few minutes from my "go" moment. I offered to do a bit more work for a fixed price. I'm trying to get my client's attention for more instructions and then to get paid.

November 28

some work history details

I was recently asked "When is the last time that you had steady work?" As with many questions, I don't see simple answers, because the question makes assumptions that need clarification.

One answer is that I've had one steady client for almost 8 years, but that has averaged 5 - 6 hours a week. Through '16, '17, and part of '18, it averaged 10 hours a week, so that more or less kept me afloat. Starting in 2019 it's been 5 hours. So one point is that I have one client who loves me. :) I have at least one other client who is very enthused, but he only has sporadic work. I can think of at least one other who might re-hire me, but it would be even less than sporadic. And I've had other happy clients, but they have even less work. Then there are clients who were happy with my technical work, but we had conflicts over money and such, so I just gave up on them. Given that sort of thing, I do not have the personality to freelance, but I keep trying. I elaborate a bit below.

Especially since '19 when my main project's hours were cut (due to a baby on the way), I have been trying to find more freelancing work. I come back to that in a bit, too.

Another way to answer is I haven't looked hard for steady work for years. That is, I haven't looked very hard for "real jobs." I had an incident in 2015 with a job recruiter that was so bad that it scared me away. As of the last few days, I'm finally ready to drop almost all restrictions and just find something.

Put yet another way, I've had a handful of self-imposed restrictions for the last few years, all of which I'm finally and quickly getting over. For one, I was terrified of working 9am - 5pm hours. I had specific reasons to believe it would literally kill me in a few months. I thought for many years that I'm a hard-wired night owl.

As of now, if I'm going to work a fixed schedule, I would ideally like it to be at night. However, I've been relatively active by 10am almost every day since October 2. That's not the same as getting up at 6am, but it's time to try that. 10am is much earlier than I've usually been active.

Another restriction or perhaps insanity is that I can't believe that I can continue to be this incompetent at freelancing sales. I just keep thinking that if I try a bit harder, I can get freelancing sales to work well enough. I'm partially giving up on that, too.

On a related point, I have social anxiety and a number of other problems around calling people on the phone. I've fussed about that at some length in this blog. As of the last few days, I'm getting over that, too. At around 4:45pm today, I was finding out just how unreliable email is. I had suspected this, but I was on the phone with a recruiter and was seeing how bad it is in real time.

In the last few days I've called 6 - 7 recruiters, talked to 2, and left voice messages with the rest. Obviously I need to increase those numbers, but that's a big step for me.

I also left messages with a handful of Craig's List computer gig folk today. Up until today, I just would not initiate calls to CL folk.

Last time I dropped most of my restrictions, some of which I listed above, I was starting work 11 days later. It's time to do the same: drop more restrictions and call more people. I have been told a number of times how good I am on the phone, so I should use that.

So there is the long answer. Put another way, if I just act like a relatively normal person, I can probably have steady work fairly easily. We'll see. I suspect the job recruiters stayed on vacation this week. Perhaps they see little point in recruiting at this time of year. I should learn more day by day.

August 29

telling the "call me" crowd what for (slight revision posted later)

As I say in the following email, I joke with my favorite, regular waitresses that being nice is leading me literally towards starvation (note on this below). I muse that I need to start screaming and throwing things. I proverbially did it. This is through the Craig's List relay, so I question whether it will go through. I get just enough response from CL that some things obviously go through, but there have been quite a number of mysterious non-responses. I am considering setting up a system to send beacon images. I have to set it up programmatically because if I load the image into Big Evil Goo Mail manually, it's already in Goo's image cache, and I won't see it load when the recipient opens the email (whether the recipient is on Goo or not).
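
The beacon endpoint itself is nearly trivial; the bookkeeping around it is the real work. A sketch, with the file name, log path, and URL parameter all invented:

    <?php // beacon.php?id=UNIQUE_PER_EMAIL : log the hit, return a 1x1 GIF
    file_put_contents('/var/kwynn/beacon.log',
        date('c') . ' ' . ($_GET['id'] ?? '?') . ' ' . ($_SERVER['REMOTE_ADDR'] ?? '') . "\n",
        FILE_APPEND);
    header('Content-Type: image/gif');
    header('Cache-Control: no-store'); // otherwise one cached load can hide the rest
    echo base64_decode('R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7');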

First, his ad. I am paraphrasing such that it's harder to find.

XYZ support on Linux [via Zoom]

compensation: $50 to get it to running on AWS

I have XYZ on my [home computer]. I need it to run on my server. I suck at configuring XYZ. Can you help?

Give me a number to call you at and a time to call you.

my response

I've gotten perhaps a dozen *types of* systems working on AWS, let alone individual instances of systems. I still have the crypto-signed emails from when AWS flew me out for an interview in 2010, and roughly 13 of their recruiters contacted me last year (I definitely have those emails). Why I didn't go down that path last year is another story.

I could go on, but I'm going to address another part of your ad, regarding phone numbers and calls. This is years of frustration coming out; this isn't specifically about you.

  • I know for a fact that some Craig's List ads are phone number harvesting operations or far worse. Why would I send a phone number in a first email?
  • You get my full name in the "From." My name is worldwide unique. I have a huge web presence by some standards. You can know 80 - 98% of what you want about me from the start.
  • Meanwhile, even if you were to respond in writing, the information that comes through the Craig's List relay is asymmetric. I don't get ANY OF your name, real email address, domain name, company name, etc. Even if you wrote back with "call me" and your phone number, I'm looking at a black hole.
  • Do you want a technical person or a performance artist / actor? Tech people are notoriously shy. Not all of us have joined the Zoom-ing, shod, masked, vaxxed zombie horde. Nothing you can say or show me live is likely to help in your case. I want your ()#*$*@#*@ code! In the unlikely event that your code is sensitive, or the more likely event that you think your code is sensitive, I can understand wanting to establish trust. If that is the issue, let me know in writing, and I can probably solve that in writing.

I could go on, but I've probably gone way too far. I joke with my favorite, regular waitresses that being nice is leading me quickly to starvation. I need to try screaming and throwing things. This time I'll really, or at least proverbially, do it.

For the record, I see the apparent conflict between waitresses / eating out and approaching starvation. That's another topic. I'm fairly open on here, but I don't want to go into that sort of gory detail. Approaching starvation is not quite literally true but it's too damn close. I am eating a lot less than I did a few years ago.

August 21

advice on a blog and email newsletter

This is in response to a specific request. Sorry it's taken so long. What you're asking falls somewhat outside of what I'd like to do, but I should give you something, and maybe I can find a way to help.

After I'd nattered on about the blog part, it occurred to me that some of the email services probably host your emailed blog entries as web pages. So see further below.

Regarding the blog, I have to suppress the snobbish urge to demand that you learn HTML. This discussion started when you asked how I did my site. I replied that it's hand-written HTML. If you want to learn hand-written HTML, that would be great, and I'd probably help you under my (loose) apprentice agreement. That would only be part of your "problem," though.

Kwynn dot com is hosted on Amazon Web Services (AWS), which is almost certainly not ideal for me, and which I would hesitate to recommend to you for a number of reasons. You're coming from Gab, and I very much remember when AWS dumped Parler. Amazon's books department censors at least one key book. I've nattered about this conflict in this blog and / or elsewhere, so I won't belabor it.

For sake of argument, let's say that AWS didn't tend towards evil. It still wouldn't be a good choice, because what I have is a virtual machine with root access. That's a sledgehammer to swat a fly. It's much more complicated than you need.

I am fairly sure that WordPress.com (as opposed to WordPress.org, the software project) hosts free blogs. I have no idea what the terms might be, though, and I have no idea if WordPress tends to the above types of evil. I have not heard of WordPress kicking anyone off, but that only means so much. I dislike WordPress because it's the opposite of the sledgehammer, but that's another discussion. For your purposes, any of a zillion hosts with WordPress would probably get you started. Some of them are probably free.

For that matter, there are probably a number of free or cheap blogging sites, with or without WordPress.

As for email, you probably need to almost immediately go with any of the email services. I know some of them are evil, but I haven't kept track of which ones. The big names (whether evil or not) are MailChimp and SendGrid, and I know there are others. Those services help you design emails. For that matter, some of them would probably host the web pages that are identical to the email you send. So maybe that's your solution. Look up who the various players are along those lines, and you'll probably find something.

If you're mailing a few dozen people, I would guess these services are cheap. I would start to wonder when you get past $10 - $20 a month. If you're seeing much more than that, we can discuss.

Hopefully that gets us past that round.

Also, if we get a bit further, I may introduce you to one of my apprentices / sales consultants. This is the sort of thing he researches. He'd probably be happy to help.

August 15

another small payment

I got a second $50 from the same client yesterday. Yay! Again, it's something. This client says that "calls are unnecessary." My kind of guy! I am in discussions on future work.

yet more ranting on "call me"

In opposite news, I encountered three instances of "call me" last week. I want to run around waving my arms screaming or curl into a ball.

I was going to complain bitterly for a while, but I managed to write all the "call me" people in the last few minutes, one of those writings being an SMS. The whole problem is that writing them may not get me anywhere, but I am trying.

Via email, below is what I said to another of them. I texted him on Friday, but I guess it was too late (4:33pm).

I freelance and answer Craig's List ads because I am a night owl. It's hard to get me enthused to talk during business hours.

I was well aware that you wrote at 10:19pm, but you answered with the infamous-to-me first contact of "call me." "Call me" usually brings me to a screeching halt. I could describe why, but my fear is that "'call me' folk" do not appreciate too much writing.

With that said, you may not know that the Craig's List email relay strips out all "from" information, so I don't have your name, domain, email, or anything else. Part of my challenge is that "call me" has such broad possible context. I literally can't envision the first 60 seconds of the call without more information. There are too many possibilities.

If I had continued, I might have pointed out that he got my full name through the Craig's List relay--the poster (him) gets a name but not an address; the respondent (me) gets neither. Both my first and last name, separately, are very, very rare but not unique. I know of at least two other Kwynns, both of whom are women. (I am male, for the record.) One wanted to buy this domain. She didn't seem to understand how vast this site is, as if I were squatting on the name. I told her I'd start thinking about it for several thousand "dollars" / Federal Reserve Notes. We had a pleasant enough back and forth, though.

Years ago (15 years?) I saw references to maybe 3 - 4 other Buess families other than my close relatives. Put my first and last name together, though, and I can be 99.9999% sure that I'm worldwide unique.

The point is that he has everything and I have nothing. Furthermore, I wrote 379 words; his ad was 78 words. It got the basic point across, but it doesn't lead to further answers.

August 12

I just got paid $50 as agreed. It's definitely something. I installed a website from a code file and SQL dump through cPanel. There will probably be at least a bit more work, and perhaps much more work eventually. We'll see. Overall, sales are going very badly, though.

July 18

the "clock people"

This is a follow-up to the previous entry. In December, 2021, I started seeing my clock linked from DARPA Lifelog ("Facebook") and Big Evil Goo Tube. Days before New Year's, my traffic was some of the highest I've ever had. It wasn't expensive, just high relative to a tiny baseline. I rarely pay extra for a second GB of monthly network traffic. (The first GB is free.) At one point I figured I had 130 unique users running the clock within a few hours (or less) of each other. The traffic spike continued for days.

I posted a note to the clock to the effect of "This is cool and great and all, but WHY? Where did this come from all of a sudden?" And I got a reply. It involved the site Veve.me. (I won't hotlink to it for a handful of reasons.) It's an NFT site, but, as far as I understand, they use their own blockchain or some such. That's one reason I won't link to them; it seems like a slap in the face to an open system. That is the lesser reason, though. I'm trying hard not to go on one of my usual rants.

Ok. I'll briefly rant: "Pfizer, BioNTech enlist Marvel's Avengers in latest COVID-19 vaccine booster push" -- Oct 6, 2022. VeVe seems to focus on Marvel (and other evil companies), and I want someone (or several) at Marvel tried and executed for mass murder, let alone a number of people at Pfizer and BioNTech. Perhaps "a number" is generous. There. Rant managed.

In any event, when an NFT went on sale, the potential buyer had 4 minutes to complete the transaction. After that, the NFT went back into the pot for sale. The "clock people" were mostly using a side effect of my clock--the counter of how many seconds had passed since the chrony info and sync data were fetched. When that counted up to 240 seconds, they could try again to buy. They didn't need ms-precision time; they weren't using the clock for its original purpose.

July 14

It gets off topic from tech, but perhaps today the French began a useful revolution, rather than one motivated by the usual evil (Satanic) suspects. I haven't followed my "news" sources for the last few days. I know there was rioting, but I don't know if anything special happened on Bastille Day.

Now on to tech.

yet more musings on time

This started as an email to someone. Then I figured, as I try to do, I should make it a blog entry. I stay with the "you" voice.

I'm doing an email cleanup sweep and am about to address (in another thread) an email of yours from [days ago]. I got to thinking, though, about leap seconds again, given that I'd nattered about that at one point. The last leap second was at the end of 2016. There will not be one this December; "they" announced that 10 days ago.

"They" (IERS) make predictions for a year on the difference between UTC and UT1.  If this difference approaches 1s, they declare a leap second.  Based on the predictions, it looks like the highest predicted deviation for the next year (into next July) is roughly -0.034s. 

I mentioned leap seconds during my time obsession phase, or a more intense period of it.  I was just messing with my main time system on kwynn dot com a day or two ago.  I created a clock that I claim is more accurate (as qualified in a moment) than the time.gov clock.  Kwynn dot com is *not* more accurate than NIST, but my clock conveys the user's device's clock error more accurately than theirs does. 

I actually have users of my clock, but they don't use it for precise time; that's another story. 

Once I created that clock, I figured I had to monitor kwynn dot com versus the NIST servers. That can be painful. Chrony is the NTP (network time protocol) system I use to sync kwynn.com's time. Based on the URL scheme, I started my chrony monitoring (chm) project in October, 2021. Since then, there have been 3 - 4 incidents where the Maryland time servers became unreliable or kwynn.com drifted as much as 4ms from NIST. I've joked that if I took this a bit more seriously, or I were somehow paid to keep it accurate, 4ms would be enough for seppuku.

Only a few weeks ago, I finally gave up on the Maryland servers and used the "global" time.nist.gov address. Ideally I should now display which NIST server I'm pulling from; perhaps one day. I have to keep raising the allowed error (loosening the tolerance) before my email alarms are triggered, because it's hard to keep a lock on a Colorado time server from kwynn.com, which "lives in" northern VA. The Maryland servers are something like 30 miles away.

kwynn.com primarily syncs to AWS' timeserver. That is, I have AWS set to a more frequent polling rate than NIST. I assumed at the time that AWS' server got 4ms away from NIST. Now I'm not so sure; there are other explanations for that apparent drift. Before that incident, chrony wasn't set to NIST at all. Whatever the case, though, I have various systems that poll NIST, including chrony.
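
For the curious, polling an NTP server without chrony is small enough to sketch here. This is bare SNTP, not my actual polling code: one 48-byte packet out, read the transmit timestamp back (seconds since 1900), and shift it to the Epoch:

    $sock = fsockopen('udp://time.nist.gov', 123, $errno, $errstr, 2); // check for false in real code
    stream_set_timeout($sock, 2);
    fwrite($sock, chr(0x1B) . str_repeat("\0", 47)); // LI=0, version 3, mode 3 (client)
    $resp = fread($sock, 48);
    fclose($sock);
    $serverSec = unpack('N', substr($resp, 40, 4))[1] - 2208988800; // 1900 -> 1970 offset
    echo 'server: ', $serverSec, '  local: ', time(), "\n";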

I'll try to wind down the nattering. The point is that it's hard to be sure of accuracy.

June 22

This is one of my checklist entries. My changes to /opt/kwynn have been growing. /opt/kwynn is also known as "php general utilities." That is, I have been changing /opt/kwynn for a paid project and not merging the changes back to kwynn.com. So, changes:

  • I think just the testing timestamp on kwynn email. Should not be important. Just make sure I get an email in the next few days. "Kwynn email" is the object I use to email myself sysadmin messages such as Ubuntu needing updating or my time server not being able to get enough NIST polls.
  • isKwGoo change - This protects admin parts of my site. Lots of ways to test it.
  • inht() change - I'll have to think about that. It's unlikely to break anything because it's likely been tested locally.
  • kwifs() - same
  • dr() - document root. Hmmm...
  • lots of testing needed for cron and web SNTP.

June 16

I got a complaint that my previous entry was not clear enough, so I will try to clarify. With the exception of a bit of SQL to clean up a small mess on May 31, I didn't write a single line of code from May 7 until today. I have not been coding at all, because I can't justify unpaid coding right now.

In the last few hours I have done a bit of simple coding to rearrange which NIST servers I'm using to track my server's time. The record is in GitHub.

June 14, 2023

Oracle Corporation

A search of my site shows that I haven't bitched about Oracle yet. (I wonder if I'm missing something.) "Bitching" is generous. Hoping for a literal rain of hellfire and brimstone is perhaps a closer sentiment. In my personal blog entry of May 15, I expressed something between concern and hope that Silicon Valley will be nuked. (I have seen Oracle's database-icon-shaped office buildings in Redwood City.) Perhaps they'll get what's coming to them along with their "peers."

I earned a decent bit of money, many years ago, using Oracle's RDBMS. Thus, I liked Oracle for years. I thought it was cool seeing those buildings in Redwood City in 2000.

Then along comes "Covid." Analysis of that situation didn't require paper and pen, a calculator, or a spreadsheet, let alone a database. It didn't require calculation at all. I looked at a numerator and a denominator on March 28, 2020 and could tell they were both lies, but Oracle had a "vaccine" mandate for at least several months.

The purpose of governments is to promote Satan worship--a form of "oppression," if you will. The purpose of "health" "care" is to maximize the efficiency of draining people--their health, life force, money (which represents life force), etc. The goal is not just to kill them; it's to make sure they die slowly, miserably, and are bankrupt. The purpose of a database company like Oracle is to comply with Satanic oppression, fail to do trivial data analysis, and send their people into the "health" "care" system or directly to death. (I suppose someone who is found literally stone cold dead still gets a very expensive ambulance ride, so there are "health" "care" expenses.)

I'd like to see the corporation tried and seized for mass murder and see their stock price go to zero. Then perhaps some of their people need to be tried and executed for mass murder.

coding update

This is addressed to one person who asked, but I might as well make it an entry.

My coding is not going anywhere lately. My last GitHub entry, including my 7+ year project, was May 7. I did a bit of SQL for "7+" on May 31 when May's bill had some trouble. I barely consider that sort of thing "coding," though. I of course am rewriting that application in MongoDB, but the live version still runs MariaDB. It went from MySQL to MariaDB with an upgrade to Ubuntu 20.04, as I recall. I just did that upgrade a few weeks ago. I can't do the upgrade beyond that because that would install PHP 8, and Drupal 7 breaks with PHP 8. That was the final straw that caused me to start rewriting it.

For 7+ years, I've almost always been "ahead" of my hours. I've been on auto-pay for much of that time, so I try to stay ahead of that payment. As of today, I'm 10 billable hours behind. Right now the plan is to get back to it when the semi-automated check arrives. It's automatic on my client's end, but robots print zillions of checks a day, put them in envelopes, and snail mail them through Rothschild's Postal Service. (Jordan Maxwell (1940 - 2022) talked about this privatization. He said the US Post Office has roughly 7 employees as opposed to the US Postal Service. That is, the original Office still exists, perhaps to interface with USPS. This fact may or may not be trivial to prove.)

I would hope my client considers himself at least somewhat lucky that I've worked 5 hours a week for the last 3.5 years. Given that this isn't enough to live on, it's fair to get somewhat behind when I'm actively searching for more work.

As for my personal project coding, I have trouble justifying it and getting excited about it. I need to keep at that search for work. History has tentatively shown that I get more done overall when I am working on my personal projects. I feel somewhat productive, so it leads in turn to more productivity. Perhaps I should think hard about that. Still, if I'm sitting at the computer these days, I feel I should be job / gig hunting.

My job candidacy at that one company is still technically open, but I haven't heard a peep since May 10. The company says that if one gets deep enough in the process, they will do their best to find a candidate a job at the company, even if it's not the one the candidate applied for. Thus, I am still optimistic that I'll get farther in the process. I have pretty much given up on the "when," though. I've also given some thought to applying for another job at the company, to help that process along.

I haven't put a "real" job application in since June 1. I'm now at 69 applications since November 1. I have answered roughly 15 Craig's List computer gig ads since June 1. I really would like to crack that nut, or at least put cracks in it. I have considered another couple of candidates to help me solve the "call me" problem. Maybe there will be progress on that in the next 24 hours.

June 3

I saw an ad seeking computer services. The ad specified a certain religion. I have no problem with that, especially given the context. The potential client thought he was going to get Ubuntu Linux. That decision was made before I got involved, so it seemed a good start. Then he writes back to say that it turns out someone installed SatanSoft on his server.

I got stuck on his religious question because it's not precise enough. The religion, like almost all religions, can be interpreted too many ways. I got a long ways into an essay trying to explain my belief to find out whether it was close enough to his. Then I find out about SatanSoft. What good is his religious filter if that's the result? I asked him if it was an approved member of his religion who installed SatanSoft.

This is another demonstration of why I should not be freelancing, or that I at least need an intermediary in many cases.

May 26

I mentioned UTC to one of my former apprentices. He said he uses it because he works with servers in different timezones. This is my reply.

I've very rarely worked in different timezones. I'm often dealing with the UNIX Epoch timestamp, though, which is in UTC. As best I've ever been able to tell, MongoDB doesn't have a date data type, so I use the UNIX Epoch. Now that I'm writing this publicly, I should add that the Epoch has served my purposes just fine. Occasionally I look again for a MongoDB date type, and I'm still reasonably sure there isn't one.

Actually, I use the Epoch as the true / primary / always-used field, and then I generally have a denormalized / human-readable / redundant secondary field so that I can keep track of what's going on in the database more easily. I usually have a human-readable date as part of the _id / required primary key, too. The date may change, and one is not allowed to change _id, only delete and insert a new row, but that's fine. It still makes it easier to keep track of what's going on. Again, I use the Epoch as the true field on the matter, and everything else is an aid for me when looking visually and not used in calculations.
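
Since I brought it up, here's a minimal sketch of that pattern using the PHP MongoDB library (composer's mongodb/mongodb). The database, collection, and field names are made up for illustration:

<?php
// Minimal sketch of the Epoch-plus-human-readable pattern.
// Database, collection, and field names are hypothetical.
require 'vendor/autoload.php'; // composer require mongodb/mongodb

$col = (new MongoDB\Client())->mydb->events;

$epoch = time(); // the "true" field: UNIX Epoch seconds, UTC

$col->insertOne([
    '_id'     => 'event-' . gmdate('Y-m-d', $epoch), // human-readable date as part of the key
    'ts'      => $epoch,                             // all calculations use this field
    'tsHuman' => gmdate('Y-m-d H:i:s', $epoch),      // redundant; just for eyeballing the DB
]);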

Also, I have my small suite of time services, and I use the Epoch with those, too.  JavaScript does the Epoch in milliseconds, and I wrote my PHP extension to do it in nanoseconds.  I've seen various Epoch timestamps literally 10,000s of times, I would guess, although it's not like I'm paying close attention usually.  I've at least glanced at timestamps many, many times.

A few years ago I got bent out of shape dealing with the NTP timestamp format which was formalized in 1985. At that point, 32 bit PCs were a few years away, let alone 64 bit. That format is 2 X 32 bit fields. "NTP timestamps are represented as a 64-bit fixed-point number, in seconds relative to 0000 UT on 1 January 1900. The integer part is in the first 32 bits and the fraction part in the last 32 bits..." I've spent a bit of time in at least two languages and a few rewrites turning that into the UNIX Epoch.
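
For reference, the conversion itself is small once the two 32-bit fields are unpacked from the packet. A quick PHP sketch; 2208988800 is the number of seconds from the 1900 epoch to the 1970 epoch:

<?php
// NTP timestamp (seconds since 1900-01-01 UTC, 32.32 fixed point)
// to UNIX Epoch (seconds since 1970-01-01 UTC), as a float.
function ntpToUnix(int $ntpSeconds, int $ntpFraction): float {
    return ($ntpSeconds - 2208988800) + $ntpFraction / 4294967296; // 4294967296 = 2^32
}

// Round-trip check: current time -> NTP fields -> back to UNIX.
$now     = microtime(true);
$ntpSec  = (int)floor($now) + 2208988800;
$ntpFrac = (int)floor(($now - floor($now)) * 4294967296);
echo ntpToUnix($ntpSec, $ntpFrac), ' ~= ', $now, "\n";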

One interesting note is that the 32-bit seconds field of the NTP format will roll over on February 7, 2036, if my arithmetic below is correct. Perhaps I'll shepherd the new NTP format and the transition code for that. If there are any signed 32 bit implementations left in January, 2038, the UNIX Epoch will roll over. A reading of that article shows that it's a tricky problem. I may be dealing with it, too.
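
Checking that rollover arithmetic with PHP's own date math:

<?php
// The 32-bit NTP seconds field wraps 2^32 seconds after 1900-01-01 00:00:00 UTC.
// Expressed as a UNIX timestamp, that is:
$rollover = 2 ** 32 - 2208988800; // = 2085978496
echo gmdate('Y-m-d H:i:s', $rollover), " UTC\n"; // 2036-02-07 06:28:16 UTC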

I am well aware of the difference between hard technical problems and seemingly fantastic tales, but John Titor either implied or stated that one goal of his time travel missions was to bring back (forward) 1970s computers to help them deal with the 2038 problem. I'm not sure that it was of vital importance, just helpful. His timeline had a degree of civilization breakdown, so perhaps it makes more sense. I recently mentioned Titor in my personal blog.

May 14

a gig application

Rather than send a private message, I might get re-use out of this, so I'll be both public and somewhat vague.

Will your server use Linux, as opposed to Satan's Operating System (known to the shod, masked, vaxxed zombies as "Microsoft Windows") or the iCultOS? Linux is free as in beer and speech (free and open source). Its "EULA" allows anyone to use it, distribute it, etc. I find it annoying that someone mentioned SatanSoft with a straight face a few posts before yours. No one on the relevant site should be using SatanSoft or, arguably, the iCultOS.

Python and (I'm almost certain) Flask are (free and) open source, but they are not an operating system. Flask was reasonable advice. It's not what I would use, but it's reasonable. I applaud the notion that you're going to try to work it on your own, but, no offense, I'm not sure it's feasible.

I am male, so check on that. If other factors look good, I may write an essay on whether I'm Christian or not. I realize there are definitions that seem trivial for those brought up in the religion, but they are not trivial for others.

If you're looking for volunteers, I hope you find them. I can help for a trade or relatively little money, but I cannot do it for free, for reasons I've nattered on about at some length on this site, but I'll repeat if needed. I would literally work for food and a place to stay, but showing up in person raises the stakes around compatibility and such. I'm going to assume you are west of the Mississippi. I found your website, but I didn't quickly find your location. I'm near Atlanta, but, still, under the right circumstances, I would show up in person. We can discuss.

Even before discussing the money option, I should discuss the apprentice trade. Again, I applaud your wanting to learn, and I'd consider trying to help you learn, but I don't think it's feasible given that time is a big factor. Do you have someone else in mind as an apprentice? I would give quite a bit of help in trade if the apprentice helps me.

If we take the money route, I don't want to discuss potential "dollar" (Federal Reserve Note, Rothschild toy money) amounts publicly.

April 29

A new Ubuntu upgrade checklist:

  1. After the Ubuntu upgrade, upgrade composer for MongoDB. On my local machine, I saw a subclass definition conflict when I upgraded from Ubuntu 22.10 to 23.04.
  2. Test web message form and email at the same time.
  3. clock and stats - looks like /var/log/chrony may have a new permission issue
  4. quick email

April 28

I have likely found an investor for projects. There is no need to go into details as to what we're working on. For one, we're not entirely sure yet. This will not be a full-time job, I will keep looking for "real" but night jobs, and I'll almost certainly still be available for at least small projects. Or I'll be available for larger projects if you can outbid the investor. That may not be hard, as I'll explain. Meanwhile, there was no feedback on the IQ test today, so the business week has passed on that one.

My investor wants a "dollar" (Rothschild debt note / Federal Reserve Note) number of what I need to stay afloat. I told him that he could get a lot of work out of me just by accomplishing "flotation." That's one of several reasons I get a bit irritated with the question "What's your rate?" Expenses are by the day and month, not by the hour. I have always been willing to do quite a bit of work for a steady monthly fee or a barter, even if it's 1.2 orders of magnitude lower than dev pay. I don't want to do that forever, but it would be a big improvement over what I've been doing, or failing to do.

I've never calculated that exact number. I just did. I'm not going to give the number, but it's depressing. Real dev pay would solve my problems several times over; it's not like I spend that much. Freelancing has just been that disastrous from a sales point of view. A summary of how I got into this mess is in order, or at least some highlights / lowlights.

I'll somewhat arbitrarily start in 2020. I want Billuminati Gates and at least a handful of others tried and executed just for one specific series of events they urged the shod, masked zombies to destroy in 2020. I have probably not grasped how bad that was for me. It is my equivalent of an annual religious renewal of faith. It was totally destroyed in 2020, and effectively destroyed in 2021. In 2022, kids who were 22 years old and in perfect health were wearing masks. I wouldn't have tolerated that. Worse yet, there may be "vax" requirements involved, in which case I would probably be in some people's faces for their gross idiocy. One way or another, I might manage to get myself arrested arguing with others involved. Their official policy needs to be for Gates and company to be tried and executed, or I can't stand to be around them until his execution is made public.

The point being, my faith has not been renewed since 2019, and now I'm getting increasingly angry at my peers in the matter, but I don't even see them as peers but as shod, masked, vaxxed zombies. (They weren't vaxxed in 2020, of course, at least not with the latest deadly cocktail.)

I think I've mentioned in this blog the one project disaster in the summer of 2020, then 3 disasters in the summer of 2021. I got a couple of small projects last year. Also last year I tried as hard as I've ever tried to get freelancing projects. I pushed hard, relative to my limited capacity on the matter, a number of times. In November I didn't refresh my paste buffer and sent the wrong email to a potential client. The email was meant for someone else. I was deliberately sarcastic to the "someone else" because they deserved it. I figured there was some chance they'd see what devs were reading into their ad and change their tone. The actual recipient was understandably offended. That was the point I realized that I'd better stop the freelancing route. My sales skills really are that bad.

Overall, I just kept thinking that my sales couldn't go so badly for so long. If I just worked a little harder, I could make it work. In November I started looking for "real" but night jobs. I've applied to 55 jobs since then. That resulted in 2 interviews and perhaps 1 - 2 other human responses. One interview series is still in progress, as I've previously described.

So, another point being that I didn't realize the real job path would also go this badly for this long. The ongoing interview series / candidate process may wildly succeed 3 weeks from now, but, until then, the hunt as a whole is a failure.

There is almost always more to say. Part of the point of this blog is to whine. That's enough whining for now, though. I need to get it through my head that my investor and I may accomplish great things in the next few months, or at least much better than what I've been accomplishing.

April 27

job hunt update

As far as I know, I passed the written interview. At least, my hiring process contact sent me the next step of the process: an IQ test. I got the test link this past Friday, April 21. They also sent a document explaining the various parts of the test in detail. I spent a decent amount of time from Friday through Monday (early) morning writing a web application to reproduce the test, so that I could drill myself. The application is in my public GitHub, but I won't put a spotlight on it right now. I took the test very early Tuesday morning (00:45 - 01:30 or something like that). I haven't received any feedback yet.

the latest "call me" incident (first version 20:05)

Here was the Craig's List ad:

cypress automation tester

Compensation: $600 [in an area nearby to me that is fairly wealthy; somewhat after midnight on a Sunday morning]

I’m looking for someone who's pro-efficient in cypress and can write automation tests. This shouldn't take more then a couple of hours and it's fairly easy task for the right person.

The timestamp was encouraging. The "I" voice is encouraging. On the other hand, I tended to assume a small organization because I still can't figure out why an individual would want formalized testing. I had to look up Cypress specifically, but it's a close relative of Selenium, which I briefly worked with. It's software to emulate someone using a browser on your site.

I thought it might be Cypress microchips when I first read it. I worked with those years ago. In any event, here was my reply:

[sent almost 21 full days after his ad on a Saturday night around 10:30pm]

Hi [common English male given name],

> pro-efficient... Posted 21 days ago

Hopefully you'll forgive me when I chuckle at those 2 datum. I've seen that hundreds of times if not thousands. Rather than uber expert in the precise needle you want, how 'bout an argument that I can do it reasonably fast in calendar time, and I would be very happy with $600 even if it takes much longer than 2 billable hours?

I have done a bit of Selenium automation, and I've done a lot of automation the hard way in that I wrote the direct HTTP commands to scrape a website. I think Selenium or something similar existed at the time, and perhaps that would have made my life much easier. I can trivially find on my hard drive (SSD) 25,000 lines of client-side JavaScript I've written, and that's maybe 75% of the total. I take it from brief research that Cypress is Node.js / server-side JS. I have done just a bit of that, but it's not like they are vastly different, and I really need to get fluent in Node. I've dev'ed in 9 - 12 languages, depending on how you count, spanning let[']s just say decades.

The reason I freelance and answer CL ads is because I'm writing you after 10:30pm, which is early in my ideal workday. That is, I am quite the night owl. 9 - 5 hours don't go well for me. I see that you posted rather late, so it gives me hope.

Hopefully that will do for a start.

Kwynn

His reply was 25 minutes later, around 11pm (Saturday night):

Hey man,

What’s your number. I’ll call you when you're free.

Regards

Sent from the all new AOL app for iOS

Envision a common car going 70mph coming to a screeching halt. Yes, wanting to talk at 11pm is encouraging, but that sort of reply just stumps me. For more specific context, I had submitted my written interview about 46 hours before this. My commitment was towards a "real" job, but I was answering a tiny number of Craig's List ads as a rear-guard action. "Call me" has been a bane of my existence for years. ("Video call me" is of course an order of magnitude worse, and I get that one, too.) I once again ran through all the times I've suffered when I actually called them in response.

I'm not going to review my own blog, but I think that "call me" without quotes brings up quite a bit of fussing on this matter. I'll restate it, though. I have found the hard way that I am almost always not compatible with people who want to jump right on the phone and offer no alternative. It points towards all sorts of personality incompatibilities. In fact, I told him as much. It took me quite a few hours to respond.

6:30pm the next day (~20 hours after our exchange)

I have found a number of times that I am not business (or otherwise) compatible with people who want to jump right on the phone. Is there a point in sending you written questions? Would you like to ask questions? I'll talk indefinitely if we get that far, but I just can't envision the conversation at this point. I once talked to a guy for 7 hours, and the project still ended in total disaster. You would think 7 hours would establish a basis, but it did the opposite. It lulled me into a sense that I could communicate with the guy. When the work started, the conditions changed, though.

I don't answer very many CL ads lately, in part because of this. I started looking for nighttime "real jobs." I suppose the point being that you're in the majority, and perhaps I need to totally quit CL and stop wasting all of our time.

I even wrote him 4 days after that asking a shorter version of whether sending him questions would accomplish anything: "If I send a handful of questions, would you consider answering (in writing) at least one? That might be all it takes."

solutions?

April 13 - status report

Looks like I never followed up on my January 20 entry. The result of that interview was "We don't think you're right for this job, but we like you as a candidate, so please find another one at our company to apply to." Grrrrr... In terms of potential earning power, I was generally way over-qualified for that job and specifically slightly under-qualified. So, they might actually mean it when they say apply to something else. The part that I'm growling about is that they should be the experts in what I should apply for. Why would I spend the time starting the whole process over again?

With that said, I finally found a company that agrees with that sentiment; I'll explain in a moment. I am now up to 52 job applications from November 1 to March 10. I got a second interview. I will not name the company for at least two reasons. In any event, this company very specifically says that they may route me towards another job during the hiring process.

The interview was written with no time or space limit. I spent a long time and finally submitted it this morning. I don't expect to hear from them until Tuesday or Wednesday.

I also got a great response to my Gab ad for my web services. That's just a link to Gab itself; I don't want to ponder my ad right now. My ad is under "Marketplace." My potential client is already running into potential regulatory issues. I offered him another perspective on the matter, but his original idea is probably on indefinite pause. I think he'll be a great contact, though.

I also also (sic) encountered a handful of potentially useful recruiters, but no results yet. I am not quite sure where I'll look for work next. I hope to get back to relatively focused activity in the next few days.

In answer to one question from a potential apprentice, I am still living exactly where I have been for a few years, despite several periods of serious concern that I'd have to move.

March 4 - AWS EC2 upgrades

An instance I am responsible for is still on Ubuntu 18.04 LTS. Support expires in a few weeks on April 30. I know why I'm not on 22.04: sometime after 20.04, Ubuntu upgraded to PHP 8.x, and the software involved will not work in 8.x. I don't remember how long I tried to get it to work, but I concluded it was time to do something completely different. I'm in the process of that, but it will take a few more months. I'm being vague because there is no reason to paint a target on this system.

With that said, I am struggling to some degree to remember why I never upgraded to 20.04. Actually, I do know, but that's another story. In any event, I have been using test instances to make sure that the system will work in 20.04. The tests look good so far. 20.04 uses PHP 7.4.

AWS EC2 ENA

Here are some upgrade notes. One problem is specific to AWS EC2. I can't use certain newer instance types because they are grayed out with "This instance type requires ENA support." ENA is Amazon's "Elastic Network Adapter."

Without ENA, I can use C4 (compute) but not C5 instance types. The C4 type does not allow the "EC2 serial console," which is quite handy. It gives the real-time boot and shutdown info of an instance. The previous "Get system log" was delayed by minutes. (These are under a given (one) instance in the Dashboard, then "Actions" and "Monitor and Troubleshoot.") The serial console is reason enough to get ENA set up.

The "modinfo ena" command will likely show that the instance (even Ubu 18.04) is ready for ENA, but that isn't enough. If the instances is ready, I get 19 lines including "description: Elastic Network Adapter (ENA)." The next part is mildly tricky and annoying.

To use ENA, even if the instance is ready as above, you need to set an attribute at the "aws" command-line level. (There may be alternatives.) The instance that you want to ENA-enable must be "stopped" when you set the attribute, so the aws command must come from another instance.

The other instance must have AWS IAM permission to set the attribute. You'll need a user or role with permission ("policy") like this:

{
	"Version": "2012-10-17",
	"Statement": [
		{
			"Sid": "VisualEditor0",
			"Effect": "Allow",
			"Action": "ec2:ModifyInstanceAttribute",
			"Resource": "arn:aws:ec2:*:12ishdAWSAcct:instance/*"
		}
	]
}

Then you need to give the "other" instance that permission. I recommend giving it a "role" when you launch it. In the instance launcher, that's under "Advanced details" after instance name, AMI, type, ..., network settings, "Configure storage." Under "Advanced details," it's under "IAM instance profile." The instance runs with that role, and in the IAM screens you give the role that policy. A possible alternative is to give that policy to a user and then use "aws configure" at the command line to assign that user / password. (That should work, but I almost always use the role method.)

Once all that is set, the "other" instance needs to be running and the target instance needs to be stopped. THEN you run "aws ec2 modify-instance-attribute --instance-id i-0abc1234567 --ena-support" As I remember, a correct result is silence--an error will be "loud," but no result means success.

THEN you can take an image of the target--the newly ENA-enabled instance--and the image can be run as c5ad.large or whatever, with a serial console.

other upgrade notes

Going from Ubuntu 18.04 to 20.04, catch up with any 18.04 updates, then you may have to reboot before the "do-release-upgrade". I got scolded for failing to reboot.

Also, after the upgrade, you have to specifically install MariaDB: "sudo apt install mariadb-common". Apparently the transition from MySQL to MariaDB is in there. MariaDB replaced MySQL when Oracle bought MySQL. The community feared sabotage of the open nature of MySQL.

When upgrading from MongoDB 3.x to 4.x, create a dump file (below), stop the server, erase all the actual database files in /var/lib/mongodb, upgrade MongoDB, restart the server (if needed), and do a restore. That will save you much grief. Simply trying the upgrade gets you core dumps and other mysterious errors, and MongoDB just won't work. When I say to erase all the files, I assume you are thoroughly backed up and such.

I only cared about one database, so it was "mongodump -d mydb" and then "mongorestore" from the same directory. (A "dump" directory is created by mongodump; you restore from the parent of "dump," which is the same directory from which you created it.) I ran "mkdir /tmp/blah" and then "sudo mv /var/lib/mongodb/* /tmp/blah". I do it that way so that a recursive delete is not in my command history, because I use the history so much. (I don't want to accidentally run the wrong command.) Here are Mongo's instructions for Ubuntu installation. The relevant commands are:

wget -qO - https://www.mongodb.org/static/pgp/server-4.4.asc | sudo apt-key add -
echo "deb [ arch=amd64,arm64 ] https://repo.mongodb.org/apt/ubuntu focal/mongodb-org/4.4 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-4.4.list
sudo apt update
sudo apt dist-upgrade

February 21

Open Metaverse

This thread and the concepts and possibly products (Open Metaverse) around it are extremely important, at least on certain levels. That's Twitter user @punk6529 posting 5 days ago.

Perhaps more to come.

January 20

job hunt

Several topics tonight. For one, I have more or less given up on freelancing. If someone wants to send me a project, great, but I've more or less stopped actively looking. I have applied to 39 jobs in the last few weeks. All of them mention the night shift or are specifically night shift. Application #34 on December 30 got me a phone interview 8 days ago (1/12). I think it went very well, and I was even inspired to send a thank you note. I haven't heard anything one way or another, though.

I got two other human responses, one of which was somewhat by accident. A good number of them have specifically rejected me. Others are just floating without any word. For one of them, I keep getting Indian recruiters trying to recruit me, even though I applied back on November 23. I answer them, but ...

my Big Evil Goo Mail checker

I have been thumping hard on two of my GitHub projects for the last several weeks. Weeks ago, for my main client, his refresh token for Big Evil Goo OAuth(2) might have been invalidated, or perhaps I just assumed it would be. I went to refresh it "manually" with a Python script, I think written by Goo, years ago. In related news, for months and months and months I kept getting emails from Goo about "out of band" authorization. FINALLY I understood the hard way that meant "manually" / "by hand" / using that Python script. So I had to work on getting auth via the usual back and forth with Goo account sites. (What in hell am I supposed to make of "out of band"?)

So, to fix the immediate problem of my client's application being locked out of Goo, I adapted my email checker to catch the refresh token and related stuff. This led me to start generalizing / abstracting the email checker. That is, I started separating the Goo OAUTH stuff from checking email. This got somewhat out of hand.

The whole business of the Goo checker is a dilemma. I call them Big Evil Goo for a reason, and it's not at all a joke. Goo is competing with Mao and Stalin for the highest body count. They manipulate information in favor of the usual Satanists. But that's another topic. I go into this elsewhere to some degree.

Perhaps it's awful that I can find little technical problems compelling when ignoring huge ethical issues, but THAT is also another topic. In any event, I should back up and explain what the "checker" is for. To this day, Android + GooMail do not do well at new email notifications. That is, who knows when my phone will finally get around to telling me about an email. Worse yet, swiping down in the GooMail app is no guarantee. There is no way to know for sure from a phone whether or not I have new mail. I built the checker to solve that. I have to actively poll, but that poll is accurate every time. Perhaps 0.5% of the time I run into some timing / refresh / propagation issues, but those are clearly such. My little checker WORKS, darn it! Not only does it work, but it pleases me every time I use it, which is about 20 times a day, perhaps a few more. (I of course have the data to know that number.... Oh God no! .....)

In any event, first I abstracted out various layers of OAUTH versus other features. Then I decided to solve another small problem. My little system works, but it had at least one quirk. I'm encrypting the refresh / access tokens. My database has an encrypted version. The user gets the decryption key as a cookie. I check email from 3 devices. The way I had it set up originally, only the device that first authorized my application with Goo got the "refresh token." An "access token" lasts for an hour. The refresh token will refresh access tokens for months or years. So, other devices sometimes needed a short version of the auth process--not the whole version.

I seem to have solved that problem now, although I haven't focused on my recent results. That is, it may or may not be near-perfect yet. In any event, my current solution goes like this:

  1. The device that (re)authorized my application gets the refresh token.
  2. When another device uses my web app, it leaves a public key on my server. The private key stays on the device as a cookie.
  3. When the original device checks email again, it encrypts the (symmetric) encryption key for the token with the public key. It does this during the moment it's using it; then my server throws it away again.
  4. When the secondary device uses the app again, it has a sym key waiting for it that it decrypts with its private key cookie. The decryption is done on my server, but then the relevant keys are thrown away in my server and only exist as cookies on the client device. That is, other than the moments the client is using my system, my system can't access a client's Goo account.

Again, the public key part seems to be working. Now that I've invested this much in the checker, it's nice to keep developing it, even though it involves mixed feelings.
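
For what it's worth, here's a minimal sketch of steps 2 - 4 using PHP's OpenSSL functions. This is the shape of the idea, not my actual checker code; the key sizes and variable names are arbitrary:

<?php
// Step 2: a new device generates a key pair. The private key becomes its
// cookie; the public key is left on my server.
$pair = openssl_pkey_new(['private_key_bits' => 2048,
                          'private_key_type' => OPENSSL_KEYTYPE_RSA]);
openssl_pkey_export($pair, $privatePem);             // -> device cookie
$publicPem = openssl_pkey_get_details($pair)['key']; // -> stored on server

// Step 3: the original device, during the moment it holds the symmetric
// token key, encrypts that key for the new device and leaves the blob behind.
$symKey = random_bytes(32); // stands in for the real token-encryption key
openssl_public_encrypt($symKey, $blobForNewDevice, $publicPem);

// Step 4: the new device presents its private-key cookie; the server decrypts
// the blob, uses the symmetric key, and throws everything away again.
openssl_private_decrypt($blobForNewDevice, $recoveredSymKey, $privatePem);
assert(hash_equals($symKey, $recoveredSymKey));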

my time machine

One of my apprentices jokes that everything I do is a time machine. One of my web apps keeps track of my client's billable time, for example. And then there's my clock. Both its upstream and downstream causes and effects have been quite an obsession for a while. (In related news, I've been studying how GPS works. It has to do with accurate time, of course.)

A few months ago I discovered "$ sudo apt install sntp" and then "sntp." That would have saved me some trouble, although hopefully my latest version is slightly better in terms of more accurate results. That is, I now have a daemonized SNTP client that sleeps in the background until asked to check kwynn.com's system clock versus NIST. I can also call the daemon from the command line.

My latest problems started when I was getting weird results from the command line. I feared that I was getting the KoD (kiss of death) packet from NIST. That's the server telling you to leave them alone for a while. Now I check for KoD, although I can only see it with my eyes. My program doesn't know what that means yet. The problem was not KoD, however. I'll come back to that.

I had the KoD detection working fairly quickly. While I was at it, I started rearranging how the output is done. It is much cleaner now. Then I was still getting weird results. I finally tracked it down to the fact that, as best I can tell, one should open, read / write, and close a FIFO every time, at least the way I'm using it. I need to proverbially tattoo that somewhere. That is, don't leave the FIFO open.
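
The proverbial tattoo, in code. This is a stripped-down sketch of the writer side, not my actual daemon (the daemon does the matching open / read / close on its end), and the FIFO path is made up:

<?php
// Open, write, close--every single time. Holding the handle open across
// requests is what bit me.
$path = '/tmp/sntp-demo-fifo';

if (!file_exists($path)) {
    posix_mkfifo($path, 0600);
}

$w = fopen($path, 'w'); // blocks until the daemon opens the read end
fwrite($w, "check\n");
fclose($w);             // do NOT keep $w around for the next request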

Once I'd sorted that out, it seems that some of the NIST servers are somewhat less reliable. What I saw from the start of this process was simply UDP packets being lost. The Ubuntu-installable sntp was having similar trouble, although it seemed it might have been having less trouble, which would be disturbing.

It seems that IPv6 packets are far more reliable. That makes some sense because I get the impression that IPv6 is still less used. Upon research, it's still VASTLY less used. It's only a few percent of traffic as of 6 months ago (go to the bottom of the article).

Perhaps I will add functionality to assess reliability and choose servers accordingly. Better yet, maybe I should just use the 2 IPv6 Gaithersburg, Maryland addresses and be done with it.

2022

December 31 (posting 23:01)

I am using ancient hardware. I bought this computer (that I'm typing on) used for something like $160 in 2017, but it came out in roughly 2009. It may have been a $2,000 - $3,000 system then; I'm not going to bother to look that up.

In any event, this has caused me at least two problems, and I'm sure more will come. One problem I list way below regarding MongoDB 5.0 and a CPU instruction I'm missing.

I *think* I have now solved the other problem, or at least plenty well enough until I get new hardware. First of all, the Ubuntu package log can be accessed with "tail -n 500 /var/log/apt/term.log" When the grub-efi-amd64-signed package gets updated, I've been having problems. Specifically, "mount: /var/lib/grub/esp: wrong fs type, bad option, bad superblock on /dev/sda1, missing codepage or helper program, or other error."

I have found the hard way that you do NOT want to leave those package errors hanging for days. Eventually your system will break in ways that even I have given up trying to unscramble.

Today, after going around and around (for only 20 - 30 minutes or so), I found that the problem is in /etc/fstab . More specifically, as I've copied partitions when I get new drives, the old fstab is no longer accurate. (If you have an old UUID entry, you don't need a UUID. You can use the /dev/ name.)

The solution to all this involves having both an old fashioned "i386-pc" partition for grub and an unused (as far as I can tell) but properly set up efi / uefi / esp partition.

This part is unnecessary to fix the package install problem. I'm just listing for the record what you need for grub to work in this situation. In the "gparted" program (runs from the GUI / "sudo apt install gparted"), the "old fashioned" partition I have is 250MB (MB, not GB), although I think it can be much smaller. I give it the "bios_grub" "flag" - after creation, right click and "Manage Flags." As I remember, you don't have to give it a filesystem. After grub runs (see below), the filesystem type will be labeled in gparted as "grub2 core.img." I think that's just a label / name and not any sort of filesystem type. (I don't think the filesystem commands show any filesystem at all.) The "bios_grub" flag is what is needed for grub to find the partition and install itself. The partition should only have that flag set.

Today's solution was to create a partition for efi so that the above package install will shut up and work. You (may) need 2 packages installed to create an old-fashioned FAT32 filesystem. "sudo apt install dosfstools mtools" Before I didn't have mtools, and that resulted in a red exclamation point in gparted.

Once I had those 2 packages installed, I created another 250MB partition (9.1 "MiB" used right now) for efi. Format it to fat32. Give it only the "boot" and "esp" flags. Then make an entry (add a line) in fstab such as

/dev/sda6   /boot/efi  vfat   umask=0077      0       1 

Where /dev/sda6 is the efi partition I just created. (You may have to create /boot/efi .) Once your partition and /boot/efi exist, either manually mount or automount or reboot. Then either redo your install via something like "sudo apt dist-upgrade" or reinstall specifically with "sudo apt --reinstall install grub-efi-amd64-signed"

When I reinstalled, I still got warnings:

Setting up grub-efi-amd64-signed (1.186+2.06-2ubuntu13) ...
Installing grub to /boot/efi.
Installing for x86_64-efi platform.
grub-install: warning: EFI variables cannot be set on this system.
grub-install: warning: You will have to complete the GRUB setup manually.
Installation finished. No error reported.	

I don't care about the warnings. It ran without breaking my packages. I don't think there is any need to run grub commands or worry about further setup or EFI variables or anything else. I did run grub only as a check. Partial results are below, just for reference.

$  sudo grub-install /dev/sda ; sudo update-grub
Installing for i386-pc platform.
Installation finished. No error reported.
Sourcing file `/etc/default/grub'
[a bunch of lines where it finds various Linux installs]

Recap below. The first part isn't necessary to solve the package problem; it probably already is set up as such. I'm just listing what you need for both i386-pc and EFI. Remember that this is for a situation where you have very old hardware with no EFI capability.

  1. An old bios partition of 250MB is probably way too much, but it works. It should have one flag "bios_grub". No need to worry about its filesystem format / type.
  2. Make sure mtools and dosfstools are installed (Ubuntu packages).
  3. Create a 250 MB (or perhaps only 25MB - see above) partition, formatted to fat32. Give it flags "boot" and "esp." This is the EFI partition.
  4. Enter the EFI partition in /etc/fstab as shown above.
  5. Manually mount, automount, or reboot. (If you manually mount, you can't be 100% sure your mount command is the same as the fstab entry. )
  6. Reinstall the grub-efi-amd64-signed package as above.
  7. Only if you are "starting over" without a pre-existing bios partition, install and update grub as above.

December 2 - continuing the job application

I just submitted the job application from my previous entry (Nov 22).

November 22 (posting 11/24 00:20 or later)

Note that I'm posting something new, further below, that I wrote on October 31 and didn't post until now.

I have given up on freelancing. There is a subset of sales skills that I don't have, or perhaps it's more accurate to say they were beaten out of me. I have been searching for nighttime tech jobs, and I'm finding a wide range. One of them has a prompt / question that I haven't seen exactly the likes of. I'm going to write / post my answer here. I won't give the exact prompt for a number of reasons, although you should be able to more or less figure out what it is. There is a character limit. I hope to stay well away from the limit, but we'll see what happens.

It appears I never linked this here, but a relevant document is "Why I'm a master software developer."

the job application answer

As of 2022/12/02 19:45, the following has been submitted. It did not change much from the 11/22 version.

  • I fixed dozens of bugs in software used at a gold mine, power plant, oil refinery, copper smelter, etc.
  • My software has been demoed at the Consumer Electronics Show. I consulted with Sony Playstation developers on adapting that software.
  • I wrote a literal digital assistant / automation web application for a lawyer. My system has been cheaper than hiring a human assistant by automating several tasks. It's processed over $700k over 7 years through a 3rd party billing system with no significant errors.
  • A few of my technical articles have been #4 - #6 on Google for 7+ years for not particularly obscure keywords within their technical sub-sub-field.
  • A number of times I have sped up code by 60x or sometimes near-infinitely.
  • 17 voicemail providers contacted by a client could not provide a voicemail over 10 minutes. I gave him indefinite recording capability.
  • I've written a wide variety of successful software: a USB device driver, an NFT contract, a Cardano Ada crypto stake pool, two browser extensions, a PHP extension, a true random number generator, an SNTP client, a time service, a tool that allowed access to patient data after the previous system's end-of-life, and a (mobile) web app to be absolutely sure whether you have new email or not.
  • I have a minor mention in the "Acknowledgments" section of a NASA paper for my code.
  • I started programming in '83 - '84 at age 8 - 9 on a TI-99/4A with 16KB of RAM. I remember what I did; it was basic in BASIC (pun), but it was real programming.

October 31 - November 22 - a system design sketch (micro-specification), a job application, and a rant, all in one

I wrote the following on Oct 31 and then got sidetracked. I'll skip the rant for now, although a rant is in order. I should post late on November 22.

I came across someone who has a plan for prosecuting the "Covid" criminals. It involves a request for a software specification, so I'll start with that.

The request is for a searchable database of evidence and software that can help organize the evidence. One question is whether we can assume that phase one will only deal with data that can be widely / publicly distributed. That is my vision for phase one. Otherwise put, I don't want the burden of trying to protect the identity of anonymous sources and confidential data, at least not to start. Someone else would have to take that thread, or we'd have to wait until I am ready to be paranoid enough, which may be a matter of a few weeks. I think a lot of this could be done in very few weeks, even if I'm working alone.

I am assuming that the database will eventually come under heavy attack, and we should assume the original store of the data will be destroyed-- thus the part about wide distribution of both the data and metadata (searchable index / various ways of looking at the data). A mechanism to distribute the data should be there from nearly the beginning.

The base product could be a web page with file upload capability, a text form (for several thousand characters at least), and smaller text fields for links. The most basic output can, for one, be a list of everything submitted with digital fingerprints (sha512 "hash") that can always verify the validity of the data. (Note that a URL by its nature is a fingerprint--unique.) Then the other part of the basic output can be a list of all the documents for download (and thus the documents themselves). A very early goal will be to encourage the relevant followers to download it early and often. A next step would be to automate distribution / backups / redundancy.
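
The fingerprint part is nearly a one-liner in PHP. A sketch, with a made-up form field name and storage path:

<?php
// Sketch of the upload fingerprint. "evidence" is a hypothetical field name.
if (isset($_FILES['evidence']) && $_FILES['evidence']['error'] === UPLOAD_ERR_OK) {
    $tmp    = $_FILES['evidence']['tmp_name'];
    $sha512 = hash_file('sha512', $tmp);
    // Store the file under its own fingerprint so the name can never lie.
    move_uploaded_file($tmp, __DIR__ . "/store/$sha512");
    echo "sha512: $sha512\n"; // anyone can re-verify with sha512sum
}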

October 26 - first entry around 4:30am, 2nd - 3rd after 18:40

dripping sarcasm version of response to ad

Speaking of "sales guy" apprentice, we're going to try something new. Several months ago I would have said that being King of the Night Owls is my biggest impediment to getting sales / gigs / jobs, and that's still true. A close second, though, is that I think the phone is only useful on certain occassions. Everyone else on earth wants to talk, talk, talk. And video calls are far worse than the phone.

I am starting to lose track of how many Craig's List ads I've responded to over the last several months where the response is some variant of "call me." I've known for a very long time that I'm not cut out for the sales aspect of freelancing, so "sales guy" apprentice is going to try replying to this particular ad. I perhaps shouldn't be replying to anything. This guy in particular pisses me off just by reading part of his ad, so I shouldn't be contacting him. The standard shouldn't be whether I want to deal with them. The standard should be whether sales guy wants to deal with them. So I'm going to write the dripping sarcasm version, and he's going to be polite.

the (dripping sarcasm) response itself

His business is totally non-technical, but he specifies the type of database. Years ago I found out the hard way what the phrase "knows just enough to be dangerous" means. My red flag is up when non-technical people are trying to dictate such things. If he is technical, he needs to specify that. Where the bleep does he get these technical buzzwords, and why does he want to dictate how it's done?

Sales guy says I should explain my choice, though. Fair enough. Controlling potential client wants a SQL database. I started working with SQL (relational / RDBMS) databases in 1997. I started hearing about MongoDB / OO / object-oriented databases in roughly 2016. About the same time, I saw how RDBs could be mangled with Drupal's "field collection." It's like a mockery of an RDB. It was probably the field collection that sent me towards OO heresy. Also, in Mongo, you don't have to explicitly create tables or keep them consistent. You just chuck data into a "collection" (table) and be done with it. Yes, that has the theoretical potential of being a mess, and occasionally I've burned myself by not keeping my data types strict, but that's been a minor concern versus the freedom of just digging in immediately. Whenever possible, I have been using MongoDB since early 2017, and I'm converting my client's Drupal system to all MongoDB. For years I've had it running with both databases.

Similarly, he says he'll buy the hosting, which he should, but the implication is that he'll choose which host. Obviously he should get input, but that's more my choice than his. Given that he didn't specify a host, the question is whether he is flexible on this, or whether he has a specific host in mind. If so, what?

PHP and JavaScript are fine, but in defense of other developers, that choice is also not his place. I started learning PHP and JavaScript, though, in 2005 during my last semester before I got my BS in CS.

I've done at least 5 credit card gateway systems. At least 3 are (still) online. I'll send those links to sales guy.

Of course he gets the source code. That is a very reasonable request. The code will be on the server he owns, for one. Neither PHP nor JavaScript are compiled, so that is the source code. Also, I can put the code in a private GitHub repository (repo), and he can get an account or I can otherwise figure out how to get that to him.

Now that I've ranted a bit, I could almost respond myself. But, again, I probably should not be. Here is the attempt / draft, though:

closer to a real response

I started working on SQL databases in 1997, and I'm still working on them. I started learning PHP and JavaScript in 2005 while finishing my BS in computer science. For almost seven years, but always part-time, I have been adding to a PHP-JavaScript-MySQL application for a lawyer. It helps automate his billing, calendar tracking, client court date notifications, and client snail letters. Paying me for the system is less expensive than hiring a human assistant. My system is a literal digital assistant.

I wrote a payment gateway for my lawyer client a few months ago, and I've written or worked on 3 - 4 others over the years.

You said you'll purchase the hosting. Do you have a host in mind? I have definite preferences. It was unclear whether you do, too.

Neither PHP nor JavaScript are compiled languages, which means the running code on your host is the source code. We can also use a GitHub repository, and you can get a GitHub account. Yes, I agree you should have the source code.

apprentice / gig hunt update

This is addressed in part to an apprentice whom I met during the summer (late spring?) of 2022, and he was quasi-active for several weeks. We talked on the phone and did screen share and stuff several times. I got into Q&A / lecture mode, and we went over a lot of stuff. That was fun for both of us, but I need to be careful about delaying the theory and getting apprentices using the client-side JavaScript and / or server-side debuggers. Then we need to find something for them to do, even if it takes them months. Otherwise put, I didn't quite get him to the point where he had "marching orders." Then I got distracted. This is my account of the distraction.

A few months ago "sales guy" apprentice became active again.

[Looks like this will get postponed for a while, although I continue above to some degree.]

AWS recruiter ~13 - posted around 4:30am

At 2:25am I received roughly my 13th email from an AWS recruiter since May. This isn't the first one in the middle of the night, but the others were coming from someone in the Middle East. This one is probably from the West Coast, if it's not from a robot. I'm tempted to write back just to make my point about the King of the Night Owls. Then again, I'm also tempted to rant and rave, so I should do that here.

It took me until very recently to be very sure that they don't give their 105 minute test in PHP. They give it in Ruby, for God's sake! But not PHP. I've barely heard of Scala, but you can take it in that!

For sake of argument, let's say that they did it in PHP. I would still be very hesitant to do it for the foreseeable future. My gig hunting has been disastrous lately. I haven't gotten a new project since March or April. I don't think I can summon the peace of mind right now. Also, I'm not coding 8 hours a day. Even in PHP I would probably run through some drills for days and days in advance. And I shouldn't be spending that sort of time optimizing for an unrealistic scenario. If I am absolutely forced to code something in 105 minutes, I don't think I want that job.

Out of the options, JavaScript is easily my second choice. There are a number of reasons to move my operations to JavaScript anyhow, but taking Amazon's test should not be a reason. One thing I could look into is to what degree I am doing abstract coding versus having to do things like open files and make web queries and such. I took the practice test for Hired.com in JS, and that was abstract. Then again, I just can't justify spending time on researching in order to take a test that only has a certain chance of going well. I would have to do very well, too. None of this matters unless I can negotiate a very flexible and / or night schedule. I have to nail it to have a negotiating position.

I think they're saying their $160,000 minimum salary has gone up. Yeeha! I would be fairly happy with an additional $6,000 in income right now. Every $1,000 beyond that would make me more and more happy. I might achieve ecstasy between $50,000 - $80,000. This is an argument for trying again to find network monitoring jobs or some such, something at a high school diploma level.

I decided to email him:

Hi Stephen,

You're roughly the 13th AWS recruiter since May. I've replied to 5 - 7, and 3 - 4 have responded. The results so far have been discouraging. Perhaps you'll have a solution, though.

My biggest "career" challenge goes to the 4:30am-ish timestamp of this email. I am a serious night owl, and I will not work 9 - 5 except for emergencies. A previous recruiter gave me some hope that if I ace your 105-minute test, I can probably negotiate what I want, but I figure I have to ace it more so than day people.

AWS interviewed me in 2010, so this isn't once in a lifetime, but I assume this test will be my last chance in this lifetime. I am not going to take that test until I am good and ready, but at this rate that's looking like never (more below). Is there something else I could do at night while I get into the proper frame of mind to take the test? (Obviously AWS is a 24/7 operation.) I don't need to do development. For that matter, if I found night work, I'd probably just stay in that path--become a network engineer, for example.

Another thought: I spend an enormous amount of time on my own projects on GitHub. I would hope that quite a few people would hire me just from my GitHub. I won't even put that URL in a first email (due to spam filters), but my GitHub user is kwynncom . Perhaps you can find one of those people who will hire me from that?

So those are my two main points / hopes: 1. Is there something else I can do at night? 2. Will my GitHub get me anywhere?

With that said, if you're still reading, some more background might be helpful:

I have been freelancing to try to solve the night owl problem, but I don't have the people / sales skills for that. Sales have been going very, very badly lately. I would be relatively happy with $15,000 a year for a few months. I would reach ecstasy between $50,000 - $80,000. Again, I don't need to do development.

For some extra feedback, PHP not being one of the test languages is part of the problem. I'm perhaps 70% as fluent in client-side JavaScript, but I've done very little (server-side) Node.js. I would have to do some research on your test to determine to what degree I need the server-side, but that's the sort of thing I have trouble justifying spending time on.

My freelancing difficulties also mean that I'm not coding 8 hours a day. I'm sure I'd reach fluency plenty quickly enough to be worth hiring, but, again, I'm not pushing that button until I'm good and ready, which makes for a nice catch-22.

Hopefully that helps.

Kwynn

October 17 - updating various software - checklist

check web form / email
test that the new database email audit feature works
JS utils - stdio - not entirely sure what to check
get_oids()
*****
SNTP
make sure /var/kwynn has www permission along with all the files
make sure current version of wrapper, runner, and base
check cron
make sure FIFO mode
make sure no runaways
check quota DB records
perhaps put up sign that chm is likely to break

October 13 - apprentice timekeeping assignment

Short version: assignment 1: compare your computer's time to Time.gov and my own clock.

To give the context going back roughly 3 years, I asked myself the simple question of "What time is it?" By which I meant: how does one obtain the time from accurate timeservers in a way that is "programmatically" useful? I'll come back to "programmatically" in a few moments. Time.gov is very useful to an end-user setting various (wall) clocks. Some of my frustrations with time.gov, though, led me to create my own clock, which I assert is better than time.gov, assuming kwynn.com is accurate (I'll get to that in a moment). (Why it's better is a discussion for later.)

My clock web application is only as accurate as kwynn.com. How do I know kwynn.com is accurate? That has caused a storm of coding at various points, all of which is in my GitHub in several repos.

Part of the answer is to install an NTP client. (Look up NTP, but I'll get to the specifics in a moment.) Or at least that's an answer that doesn't involve purchasing some inexpensive hardware and doing some fiddling with it. I can't install hardware on kwynn.com, though. It lives a few hundred miles away. Also, NTP is plenty good enough for my purposes. "My purposes" are to be accurate to something like 0.4ms at worst. Right now, and usually, kwynn.com seems convincingly within 0.1ms.

So, for a first (sub)assignment, compare your computer's time to my clock and / or time.gov. I've rarely tested a desktop that's not well-synced. Right now, from my clock, this computer I'm typing on shows 1ms off. My clock application runs on JavaScript, which only keeps time to a precision of 1ms, so it's probably only 0.5 or 0.6ms off, and that's rounded to 1. Or maybe I used ceil() rather than round()?? For a comparison, my cell phone shows 170ms off, from my clock web app. Cell phones are synced from the tower to some degree, but not a high degree, or they would never be that far off. I'm not sure what to expect from an un-synced desktop. You'll tell me.

I never got to "programmatically" in this lesson. Perhaps next time.

October 3 - "What is your specialty?"

pass 1

I'll try to answer that like a shod muggle, then I'll gripe about the question (or maybe not this time).

I started doing relational (SQL) databases in 1997, "did them" rather intensively for a few years, and still grudgingly use them. I have drunk the MongoDB / noSQL / OODB Kool-Aid, though. I've moved all of my personal operations to MongoDB and am moving my one active client's.

I've been doing web application development with some degree of consistency for 11 years, so that had better be a specialty. I am not a visual artist, though, so I don't necessarily have the vision for a pretty, mass-consumption site. If I am given the vision, I can probably implement it.

I've mostly done websites in PHP and client-side JavaScript, but I'm picking up Node.js quickly enough.

C was one of my first languages, and the PHP language is written in C and implements much of the low-level C functionality. Thus, I have a background in low-level systems programming.

pass 2

Specialties - Relational (SQL) databases; object-oriented, noSQL databases (MongoDB). Web application development. Client-side JavaScript, PHP. My C is slightly rusty but getting its shine back recently. I am very slowly starting to use Node.js. I've done Python and Java for billable work before.

More broadly, I have succeeded at quite a few technologies I'd never done before. I've written an NFT contract, created a Cardano Ada public stake pool, written a USB device driver, and have written two browser extensions. I often have more fun doing things I haven't done before.

October 2

AWS fixed their clock by yesterday evening. My latest chrony.conf has this:

server 169.254.169.123		minpoll 2 maxpoll  9 iburst xleave
server 129.6.15.29			minpoll 4 maxpoll 10 xleave
server 2610:20:6f15:15::26	minpoll 6 maxpoll  9 xleave
server 2610:20:6f15:15::27	minpoll 5 maxpoll  9 xleave

I found that if the .29 NIST server had a minpoll of 2, it would "take over" and become the reference time. I want the NIST servers as a backup, now that I can't trust AWS anymore, but I want to give most of the work to the AWS server.

I also updated my system status email notifier to keep track of time in addition to disk space, CPU usage, net usage, Ubuntu updates needed, etc.

"What is your specialty?"

Moved to a future day's entry.

September 30 - October 1 (started 9/30 02:21, continuing 20:27, first posted around 9/30 22:01) - continuing 10/1

update - 10/1 05:00, then 05:40

In my GitHub "code-fragments" I have taken my own advice just below.

In code-fragments is a search of 60 NTP servers. I found 2 - 3 that were around 1ms round trip, but it seems they drank the AWS timeserver Kool-Aid and are 3 - 4ms off. The next step would be to keep a database of addresses and see how quickly the pools change.

I didn't realize until I started coding it that all the pools are ntp.org. I think there is a way to search those servers. We need better than "Amazon." We need regions such as us-east-1.

So the assignment I made to one of my apprentices may be the best way to go--actually "talk" to people on the AWS forums. It is of course frustrating to me to even contemplate such a solution as dealing with humans.

update - 10/1 04:12

Perhaps obviously I couldn't leave this alone for a few hours.

I'm thinking that the technical solution is to start by running "nslookup" on 0.us.pool.ntp.org ... 3.us.pool.ntp.org and then do the same for 0.ubuntu.pool.ntp.org ... 3... and then 0.amazon.pool.ntp.org ... 3. So far, oddly, the Amazon pool doesn't seem to have anything in us-east-1. (Also see what other pools there might be.)

nslookup will give you 4 - 5 addresses for each. Then run sntp from Amazon. I probably need a server with a 250µs round trip because that's what I get from AWS. I found an Ubuntu pool member who was closer than NIST, but it's not quite as accurate (of course). The Ubuntu member was 2ms round trip when NIST is 4.

In any event, the goal would be to "automate" the nslookups and sntp test the servers. If we don't get a hit in the first round, we'd see how quickly the lookup results change. I have no idea, so far, how quickly that is.
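
Here is a rough, untested sketch of the lookup half in PHP. The pool names are the ones above; the sntp-testing step would shell out to whichever client wins:

<?php
// Resolve the NTP pools mentioned above and track which IPs they hand out.
// Saving and diffing $seen across runs would show how fast the pools rotate.
$pools = [];
foreach (['us', 'ubuntu', 'amazon'] as $vendor) {
    for ($i = 0; $i <= 3; $i++) {
        $pools[] = "$i.$vendor.pool.ntp.org";
    }
}

$seen = [];
foreach ($pools as $pool) {
    foreach (dns_get_record($pool, DNS_A) as $rec) {
        $seen[$rec['ip']][] = $pool;   // which pool(s) each address came from
    }
}
print_r($seen);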

Also, after endless hours of writing my sntp clients, I finally discovered "sudo apt install sntp". I haven't explored all the options, but I might have wound up writing my own anyhow. Depending on the options, the pre-existing one might not do what we want, because I'm not sure it shows the round trip time. When querying, I got several < 1ms offset hits, but the round trips were 10s of ms.
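
While I'm on the subject of writing SNTP clients: the heart of any of them is the standard four-timestamp exchange. Below is a minimal, simplified sketch in PHP -- microsecond rather than nanosecond precision, no error handling, and not my production code (that's in GitHub):

<?php
// sntp-sketch.php - a minimal SNTP query.
function sntpQuery(string $serverIp): array {
    $sock = socket_create(AF_INET, SOCK_DGRAM, SOL_UDP);
    socket_set_option($sock, SOL_SOCKET, SO_RCVTIMEO, ['sec' => 2, 'usec' => 0]);

    // 48-byte request; first byte: LI=0, version=4, mode=3 (client).
    $request = chr(0b00100011) . str_repeat("\0", 47);

    $t1 = microtime(true);                       // my transmit time
    socket_sendto($sock, $request, 48, 0, $serverIp, 123);
    socket_recvfrom($sock, $response, 48, 0, $from, $port);
    $t4 = microtime(true);                       // my receive time
    socket_close($sock);

    // Words 9-10 and 11-12 of the reply are the server's receive (t2) and
    // transmit (t3) timestamps: seconds since 1900 plus a 32-bit fraction.
    $w = unpack('N12', $response);
    $toUnix = fn($sec, $frac) => ($sec - 2208988800) + $frac / 4294967296;
    $t2 = $toUnix($w[9],  $w[10]);
    $t3 = $toUnix($w[11], $w[12]);

    return [
        'roundTripSec' => ($t4 - $t1) - ($t3 - $t2),   // network time only
        'offsetSec'    => (($t2 - $t1) + ($t3 - $t4)) / 2,
    ];
}

print_r(sntpQuery('129.6.15.29'));   // one of the NIST Maryland servers

Offset and round trip fall straight out of t1 - t4: the server's processing time (t3 - t2) is subtracted from the total round trip, and the offset is the average of the two one-way discrepancies.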

intro

An unprecedented event! A great pillar of stability has fallen! There is confusion in the land! Now that our leader has fallen, we must band together--coordinate in order to coordinate time! We must resync!

About 2 days ago (late 9/28 into 9/29) I discovered the hard way that Amazon Web Services' NTP (time) server in the Amazon us-east-1 region (northern Virginia) had essentially broken. It's been about 4.2ms slow for that long. That's "broken" by usual standards. It's broken enough that I've worked around it.

apprentice assignment

The potential apprentice assignment goes something like this:

  • Find out if any of the lists of NTP servers specify us-east-1. My results so far are "no," although there may be some ways of deriving the information.
  • Search the AWS forums / the general web and see if anyone else is complaining or has even noticed.
  • Inform the forums of the problem (optionally after researching it so you can speak to it yourself). See if anyone paying for AWS support or just AWS' people reading the forums will get some action on this.
  • Try to collect a handful of people on us-east-1 who will sync with the NIST Maryland servers and then we can help each other stay synced. In other words, we shouldn't fully trust AWS ever again. One other person whose server runs 24/7 would be a good start. I just want someone upon whom I can use the chrony / NTP "iburst" option (mentioned further below).
  • This is all still valid, but see the 10/1 4am note above.

If you reproduce the problem, this is at least Linux sysadmin experience. Programming experience is certainly possible.

Below is my first entry from early this morning (9/30), where "early" means 2 - 3am.

For at least the last day or so, the NTP server(s) for AWS EC2 us-east-1 have been about 4ms slow. I actually have some trouble believing it, but I have verification; I'll come back to that. I am reasonably sure that some systems would be having problems with that much of a difference. I would think someone paying for AWS support would be howling by now.

I verified the problem in us-east-1a, 1b, and 1e.

Notes to self on working with AWS: Auto-assigning a v4 IP address is done at the subnet level. The availability zone (AZ) is also set there. It seems that at subnet creation I have to enable IPv6. I'm going to try with AZ -1e now. I didn't know that e and f existed. They must be a year or two old at most. (This was written before the above.)

My own various SNTP clients tell me that us-east-1 is 4ms off. Also, when I configured chrony with both the AWS NTP server and the Maryland NIST servers, I could see the conflict. I think the conflict even "broke" chrony, perhaps because I did not select a preferred server. I could see the different results in the chrony log, and chronyc tracking failed due to the failure to sync. (Upon further review, this probably only happens from roughly 30 seconds after launching chrony to 90 seconds.)

Check the 2 subnet boxes, "Enable auto-assign IPv6 address" and "Enable auto-assign public IPv4 address." Those settings are then confirmed / changeable at instance launch.

Same problem with -1e. Seems all of -1 syncs with the same server. Very weird. As I've nattered about at some length, US East 1 is in northern Virginia and less than 2 network ms one way to the NIST servers in Maryland. It's about 30 miles, as I remember.

Roughly 27 hours ago I was working on my page that shows my server's time "readings." I noticed the 4ms consistent difference. Within a short period I had given up on the AWS server and synced directly with NIST.

NIST says never poll more than once in 4 seconds. I originally synced to all 7 Maryland NIST servers (that don't require encryption) and set the polling way, way high such that I wouldn't be anywhere near that limit. That led to some very odd results. I would expect chrony to phase the polling, but it doesn't. The polls would all fire at once, and the chrony log was just weird to watch. I'm sure it was syncing just fine, but it was way, way overkill.

I decided to sync to 3 servers so there is always a majority in case of disagreement. I randomly picked 3 servers. I realized I really needed to set one of them to a min poll of 4 seconds so that the initial sync happens in reasonable time. I could use "iburst" (initial burst of polls upon start) with AWS but, off hand, I saw no such invitation at NIST. It seemed that the sync took a while with a high min poll. I may play with that a bit, in fact.

In any event, I wound up with this on kwynn.com:

server 129.6.15.29		minpoll 2 maxpoll 13 xleave
server 2610:20:6f15:15::26	minpoll 4 maxpoll 13 xleave
server 2610:20:6f15:15::27	minpoll 5 maxpoll 13 xleave

The minpoll of 2 means 2^2 === 4 seconds. I find "min" and "max" confusing in that it seems the terms should be opposite. Those are the 3 NIST servers. I've found that the polling doesn't get anywhere near 2^13 seconds. The logs show that the polls are every 13 minutes at "worst," and my offset (error) times are on the order of 50µs, so that polling is just fine. (Revision: it does go towards 2^13, so I'll reduce it later.)

I started up a new instance to play with the 2^2 versus higher settings, and I got an sntp "ping" of 2ns to the NIST server! A ping is what I call the measured error between my system and the server, based on an SNTP time request. I wonder if I'll ever see that again. I often see 7µs but rarely in the ns range.

It's like a four-leaf clover. I'll enter it into the record:

$ php s.php
1664524061841247666
1664524061843091158
1664524061843092294
1664524061844935789
2610:20:6f15:15::26
0
1,843,492
1,844,628
3,688,123
-2

Perhaps I'll explain all that some other time. It's in my SNTP GitHub and such, to a large degree. Well, it's there to a total degree in that the code is there. My readings to AWS are around 4,000,000 (4ms) rather than 2 (2ns).
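
Actually, a quick decode for the curious, assuming I'm reading my own output correctly: the four long lines are the SNTP timestamps t1 - t4 in nanoseconds (my send, server receive, server send, my receive), and the standard formulas reproduce the remaining lines:

<?php
// The four raw readings above, treated as t1..t4 in nanoseconds.
$t1 = 1664524061841247666; $t2 = 1664524061843091158;
$t3 = 1664524061843092294; $t4 = 1664524061844935789;
echo $t4 - $t1, "\n";                        // 3,688,123 ns: total round trip
echo ($t4 - $t1) - ($t3 - $t2), "\n";        // round trip minus server processing
echo (($t2 - $t1) + ($t3 - $t4)) / 2, "\n";  // -1.5 ns: the "2 ns ping" (offset)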

If I set 3 servers to a minpoll of 5, it takes the 3rd poll at 64 seconds to sync. If I set the 7 servers to the same, it still takes 64 seconds.

I am also confirming that the AWS server "breaks" chrony. If I put one NIST server and AWS on 2 seconds maxpoll, it takes 64 seconds to sync, when the other servers break the deadlock. If I keep that the same but set the AWS server to "iburst," then chrony syncs very quickly and then loses sync. Then it gains it in 64 seconds. Then it can't increase its polling interval, or it keeps going up and down. I can see the 4ms error versus 2 - 3 orders of magnitude better for NIST. That is base 10 orders, not 2. It looks like in this case chrony throws out AWS and starts to increase poll interval. I'll try to reproduce "breaking" chrony.

Now I have AWS on iburst and default polling intervals--not explicitly set. I set 3 NIST servers to maxpoll of 5. AWS does the initial sync, then NIST takes over, probably after 64s, although I didn't notice it for sure. I restarted chrony, and for now I see that the NIST servers are off by 3ms. Then the NIST servers "take over."

Maybe the break I saw was very brief. In any event, it seems definitive that the AWS server is around 4ms off. 4.2 seems the most common number. The alternative is that 2 NIST servers are broken. There is a thought. I'll try the settings as above except for 3 "new" servers. It takes 3 cycles--96 seconds--for NIST to "take over."

Even if I set AWS to "prefer," NIST "takes over" in 2 - 3 X 32s cycles.

If I set AWS to the default polls and the generic time.nist.gov to maxpoll of 5, the NIST load is "balanced." In this case, chrony got latched onto one of the Colorado servers. Then chrony is looking at a long network round trip, it can't compensate, and AWS wins.

Some digging shows 4 Amazon servers, 0.amazon.pool.ntp.org through 3... The ping times to all are awful, though, for my purposes.

September 26

SNTP client, daemon version, continued

The evidence is that the daemon version gets more accurate results than the run-each-time version. I ran some tests that are posted in GitHub.

September 24 (posting 9/26 or later)

"Unnecessary Closing Delimiter"

That is right up there with Edna Mode's "No capes!"

Where was your advice, oh NetBeans, when that might have gotten me an extra $50k in income?! Seriously!

In roughly January I was ranting about this. PHP was written specifically to interleave PHP and HTML. You open and close PHP in a similar way to an HTML tag. Years ago, if you closed the PHP tag unnecessarily, the thread of execution went back into HTML mode, and if you had one single space between the PHP closing tag and the start of your output, you got the horrific, dreaded, devastating, career-limiting, depression-inducing error that cannot be named (like "Voldemort"). Actually, I'm having trouble naming it because, thankfully, I can't reproduce it right now. Big Evil Goo is inconclusive. I thought it was "...cannot be changed after headers have already been sent," but it might be "End of script output before headers." It would appear that after several suicides or perhaps mass murders, PHP and / or Apache made themselves more tolerant of such things.

Part of the problem was that it would not happen on my local system and did in production--probably because I was running a more recent PHP locally. That's why I encoded my rule #2, phrased to the effect of "Your local dev system should be as similar as possible to production."

Although the PHP doc still says:

Remember that header() must be called before any actual output is sent, either by normal HTML tags, blank lines in a file, or from PHP. It is a very common error to read code with include, or require, functions, or another file access function, and have spaces or empty lines that are output before header() is called. The same problem exists when using a single PHP/HTML file. [bold mine]
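
To make the failure mode concrete, here is a hypothetical setup that triggers it (file and variable names are made up):

<?php
// index.php. Suppose config.php ends with an unnecessary "?>" followed by
// a single trailing space. PHP leaves PHP mode at "?>", so that invisible
// space is sent to the browser as body output the moment the require runs.
require 'config.php';

// Output has already started, so this fails with something like:
// "Warning: Cannot modify header information - headers already sent by
//  (output started at .../config.php:3)"
header('Location: /somewhere-else');

The standard prophylactic, which the PHP manual itself recommends, is to omit the closing ?> entirely in pure-PHP files; NetBeans' "unnecessary closing delimiter" warning is pointing at the same idea.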

It's likely I wasn't using NetBeans at the time, in violation of what became rule #1: "Always dev with a debugger." Also, NetBeans has a tendency to cry wolf. I ignore all of its other advice. I happened to notice that message above, though, as I was working on my main-but-always-part-time project. (I still need more work.)

As for the particular instance, I think I will ignore NetBeans' advice. I was getting that bug in roughly 2011. I know a lot more now. I am in no danger of suffering that problem in this context. For one, I am well within the body of HTML and nowhere near the headers, or well past the headers.

That error may have literally cost me $10,000s because it came up in 2 different projects within a few months of each other. In one case I spent hours working around it, which seriously damaged the code in the sense that the workarounds were utterly unnecessary. I just could not figure out the error. The code still worked in that I tested it, but the workarounds created needless clutter and weirdness. Then it came up again in an instance where I was already getting irritated at my contractee. That was one of several straws that became final. The second project especially might have gone on for years and years.

the latest SNTP client reboot

Yesterday morning (as in 1am) I was tired of (so-far unsuccessful) sales stuff and got enthused enough to do some fun coding. A few times in the last week or so I rebooted my system and was watching NTP sync. I got to thinking about the NIST pinger and its usage of my C SNTP client. I wondered if the client as a daemon would get better results because it lived in memory and was already warmed up.

"Daemon" is a technical term that I had some exposure to in 1991. I now frown upon it because now I have reasonable evidence that demons are quite real and worshiped by the "elite"; demons are not at all funny anymore. But, with that said, I won't abandon the term.

In any event, the daemon version is close enough to done. I'm not sure I'll make it live, though. Maybe I'll set an apprentice to the task of comparing them to see if the daemon version does get better results.

September 17 - 19

17th - 19th (01:12) - bleeder apprentice respondent

Weeks ago I ranted about one of my apprentice respondents. This is a continuation. I wrote this entry in at least 2 parts, over 24 hours apart. At one point my blood pressure was somewhat up, but it was as much a feeling in my stomach. I had a bad reaction to this. I will recap.

When this guy wrote the first time, he was quite eloquent as he talked about himself. I count the word ratios below. Then, as I spent a lot of time trying to interpret some reasonable ambiguity, he got more and more upset that I wasn't reading his mind, and his word count went way down. His timestamps were all within banker's hours, let alone business hours. I don't think I realized until this incident just how much that irritates me. Anyone who writes repeatedly during business hours should have the world as their oyster. If one is both starting to massively waste my time and is doing so during business hours, I need to learn to disengage.

Looked at another way, I was caught in my own trap. I will give people who can write well massive credit, but I also need to see how they write in the back and forth.

The following was written over a day ago but not posted. I tend to think I should post the ranting. It does make me feel better.

I'm going to rant some more about that apprentice respondent from weeks ago. He came up in my mind because it's been 2 months since my previous Gab apprentice ad, and I've tentatively established that I should re-post every two months. Thus, I am writing a new draft of my apprentice ad, and I started thinking about him. I just read part of the exchange again, and his responses almost seemed specifically tailored to annoy the crap out of me. [Written a day later: ] It's far worse than annoy, actually. One way I've stated my apprentice search is that I've pushed away everyone in my life who is not helping me find business. I am so desperate for someone heading vaguely the same direction that I set myself up for this sort of thing.

At a glance, it appears I didn't record the final chapter. His final response was "Obviously, you haven’t been reading my emails. I'm not going to constantly email back and forth. Please remove me from your email. / Sent from my iPhone."

HOW DARE HE ACCUSE ME OF NOT READING HIS EMAILS!!!! I was responding specifically to him at roughly a 25:1 word ratio. His first email in which he talked about himself a lot was 250 words and 1,333 characters with spaces. Then there were three more emails from him that added up to 75 words and 370 characters. So perhaps that's both narcissism and some degree of autism that makes it impossible for him to understand someone else's understanding of what he's saying.

To cap it off, his 4 timestamps were 11:17 AM, 12:34 PM, 3:59 PM, and 2:12 PM. None of those were in my specified period. Perhaps this is something I need to be more aware of in myself. In one sense, he wrote fairly well. If he were responding during my timeframe, I probably would have gotten on the phone with him and worked it out. Perhaps I need to be aware of just how resentful I am towards anyone who is both emailing at those times and wants my help. Early on, I should have asked him about his availability later in the day.

Perhaps I should write my response to him and maybe even send it. He wanted to work on backend stuff. I asked him if he had ideas on what he wanted to work on or whether he wanted mine. I warned him that if he wanted mine, he'd have to ask specifically because I could list projects for hours. Then there was another back and forth. Perhaps I'll find it satisfying to write the flipping list.

back-end-only (sub)projects

From my GitHub only, in most recently changed order to oldest. I'm specifying how much is backend, or at least a rough estimate. Part of the problem is that I decided to indulge him in his specification of backend. That was probably a mistake.

  • examples -> Alpaca
  • general-utils -> base62, boot, email, fork, inonebuf, isKwGoo, kwshort, much of kwutils, lock, machineID, mongodbX4, web-server-access-log... - about 75% - 85% of it
  • code-fragments - God only knows
  • simple-time-server - all of it
  • aws-ec2 - roughly 50%
  • chrony - 50%
  • random-number - 20%
  • nano-php - all of it
  • astronomy - 50%
  • nft - all
  • positive-gmail-check - 30%
  • memverse-nonweb - all of it; that's why it's "non-web"
  • generic-php-login - 20%
  • running-sysadmin-examples - all of it, in that sysadmin is sort of by definition back end, but that could be argued
  • sysadmin-email-alerts - all
  • sntp-web-display - 50%
  • ubuntu-update-web-checker - 60%
  • tour-calendar - 80%
  • sntp-client - all
  • transmission - 60%
  • scripts-in-use - all
  • github-activity - 70%
  • true-random - all
  • readable-primary-key - all
  • flightgear - all
  • newconstructs - all

> Am I wasting my time or are we going to do some work? This back and forth emailing is getting old.

As opposed to real-time com? At what time of day? My ad said "A big plus is if you are sometimes available during a window from roughly 7pm – 2am (US) Eastern." Did you read my ad? "Do some work"??? You think you're going to be valuable to me doing actual paid work? You have given me zero reason to spend time on you! You've already wasted an enormous amount of my time. I'm the one out-writing you at a 25X ratio.

I specifically said that I don't want to spend time listing what I just listed above, that you could have looked up yourself. And my key response to you was that all of the above required Linux, so go install it. How is that unresponsive?

Boy, I am really pissed. Part of the issue is his timestamps. Anyone writing emails at those times should have the world as their oyster. Why should I help him?

September 7 - Mr. Zoom II (updated after 20:46)

First posted roughly 19:30. My next reply after 20:46.

I had an encounter with Mr. Zoom II--probably not the genetic son of Mr. Zoom I, but conceptually related. Looks like I first mentioned Mr. Zoom [I] on May 30 and updated that on July 4, but your search of this page is as good as mine. May 30 seems later than I remember.

Below is our exchange. I am removing one technical keyword to make it a bit harder to find him, and because the word isn't relevant.

My blood pressure went up, and I started to blow my stack. Then I decided that his reply was well-written, so I tried a compromise.

His ad:

I am looking for someone to help with a ... project I am working on. The gig has potential of becoming a full time paid gig. If you are interested please let me know and I can give you the details of the gig and what is entailed with the project. I would keep at it on my own but other project keep pulling me away and I don't want to get behind on this project.

My response #1

Alternating between general and specific, I started programming decades ago when I was very young, so I'm not particularly old yet. I've been doing it professionally, on and off, for decades as well. I have a BS in CS, for whatever that is worth. Years ago I dev'ed for 4 years for 2,000-employee software companies. Lately I have been writing a web application for a lawyer. That project has been going for six years, but it's always been part time. I have done zillions of lines of client-side JavaScript. I've only written a few hundred lines of Node, but that's been as recently as 14 hours ago, so I'm getting better at it.

If I were a morning person, I could work wherever I want. You are my first email of the day, though. I am a serious night owl. I try to solve this challenge by freelancing, with mixed results.

Now hopefully I can admit that I haven't done ... specifically. I've used a decent bit of jQuery. I tried Angular briefly but ran into issues that led me to conclude that I could write faster without it. I've heard good things about ..., and it's been on my list to look into. So perhaps this is my chance.

his reply 1

Hi kwynn, I am still looking for some help. I would need to do zoom meeting so I can meet you. If your still interested. [signed]

my reply 2

Hi ...,

Thank you for answering. My response ratio is getting really bad. At some point I have to wonder if CL still works. On one hand, you give me hope. On the other hand:

To your email, I said I'm not that old, but I'm older than the snap to video call generation. I am a foaming at the mouth open source fanatic. I have not installed Zoom, and I hope to go through life without ever doing so. (As best I can tell, it has to be installed as opposed to a web service, but I'm not certain of that.) Actually, the open source reason is secondary. The primary reason is another topic. I'll be happy to get into it if you want to know, but I'm starting to fear that our word ratio is getting too high. Along those lines, I'm simply going to ask: may we explore alternatives to a Zoom call?

Kwynn

his reply 2

Unfortunately I won't hire anyone who does not do a Zoom call. I need to be able to see their face and know who they say they are. Plus during jobs I have zoom meetings at least once a week to catch up and see how projects are going.

Good luck in your pursuit of what your looking for.

my reply 3

Hi ...,

Perhaps I need to give ground in some areas. You won't hire without Zoom: understood. May we go back and forth by email a bit as I decide what I need in order to make Zoom worth it? You have your Zoom criteria; I have criteria, too. You did answer me. You're going to have great difficulty finding better developers without hiring recruiters, and you might not even then. So may we compromise and take a few steps before Zoom?

> I need to be able to ... know [they are] who they say they are.

I can think of foolproof ways to accomplish that. Would you like suggestions on that point?

his reply 3

Actually I have some interviews with some current developers via zoom. Zoom, Skype, etc are very popular interview avenues almost everyone of uses at least one or a combination of them.

Look Kwynn I appreciate the interest but I'll have to keep looking for the right fit.

my reply 4

Skype is of course worse than Zoom because it's SatanSoft, but that wasn't my point. My reply 4:

> Skype

You didn't mention Skype. You seemed intent on Zoom, so I was assessing what I needed to make installing it worth it.

> Actually I have some interviews with some current developers ... are very popular interview avenues almost everyone of uses at least one

I've done such video interviews, gotten the work, made the customer happy, and got paid. I didn't say you couldn't find other developers to jump right on a call. I implied that I'm very likely better than they are. I have a number of reasons to believe I am in the top few percent of devs who get a very high X factor more work done. It might be worth a temporary compromise so we're both happy. I didn't do video calls until they'd passed my criteria. It might take 100 words in answer to my questions.

I'm not going to start asking the questions because I'm 40% sure that you're going to say something to the effect of "I won't deal with anyone who isn't jumping up and down to do a video interview with almost zero context going in." But I want to make sure, or hopefully be wrong.

commentary

I originally said that maybe I can save my scathing commentary for later. My commentary on Mr. Zoom I covers it pretty well. Now that I've answered again, I'll hope there is no need for scathing commentary.

September 5

some Node.js dev and AWS Lambda

My latest GitHub entry is Node.js code that (mostly) runs in AWS Lambda. The code is based on my observation that my widget to check my "simple" timeserver does not check firewall issues around the relevant ports. My website is calling the code through internal rather than external IP addresses, so the AWS firewall is not at issue / cannot be tested.

Lambda lets you create just a function and call it without setting up a whole server. Thus, it's external to my site and tests the firewall. That is, if I ever got it working fully.

One problem I had was around JavaScript (server-side Node) "promises." I got on a tail-chasing expedition that even I am prone to at 4am. When I took a breath and tried again, the promise was much, much simpler, and I got that working fine.

I'm not sure I'll pursue Lambda further for this purpose. Lambda had the "net" library but not the "dgram" (UDP) library. So I'd have to add that. That's not a big deal, but it's just the sort of annoying thing that comes up when you're not using a real server. Also, Lambda wasn't processing TCP IPv6 addresses, when the same code worked locally. Thus Kwynn's Dev Rule #2, one of my first entries way, way below. Rule 2 says that your cloud setup should be as similar as possible to your local machine. The rule prevents situations such as this.

There is probably a simple solution to the TCP IPv6 thing, but, again, I may not chase it. While I'm at it, some minor "gotchas" on Lambda: you may have to "File - Save" within the Lambda web editor, and you definitely have to "Deploy." If you don't "deploy," you hopefully get a message that it's running the old code. You might want to use manual version output or even auto version output such as the (partial) md5 of the file itself. Deploying can take roughly 3 seconds. There is an indicator, but I didn't see it for a while.

Do NOT put "process.exit();" in your Lambda code. The individual invocations to the code are not nearly as individual as you'd think. Thus, killing the process after a timeout will very likely kill the next invocation. I had my code set to kill after 3 seconds, so about half my invocations were failing when I hammered on it to test.

If you define a variable globally, I am almost certain it will persist from invoke to invoke. You have to add some simple code for Lambda to invoke your function; it's in my example and in their doc. You can tell whether you're running in Lambda based on a number of environment variables, also in my code and their directions.

AWS billing is often delayed by very roughly 30 - 180 minutes. I've never nailed it down. My messing around cost nothing. I would probably have had to mess around for hours and hours before it charged 1 cent. The invocations with errors took about 150ms, which is part of the billing measure. The invocations took about 50 MB RAM because Node is interpreted, so that's the size of the interpreter more so than my code. It cost me nothing, but there is a $0 entry in my bill showing all of my activity. If I'm reading it right, "Compute Free Tier [is up to] 400,000 GB-Seconds." I used 2.615 seconds after running my script 195 times. The TCP IPv4-only version ran in about 12ms. Their term is "request" rather than "invocation." 1,000,000 requests is the free tier.
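
For concreteness, some back-of-envelope math on those numbers, assuming Lambda bills on allocated rather than used memory and that I left the default 128 MB allocation (my understanding, not something I verified against the bill):

0.128 GB x 2.615 s ≈ 0.33 GB-seconds, which is less than a millionth of the 400,000 GB-second free tier. No wonder it cost nothing.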

August 28 - email server, IMAP, etc.

This is part of my gig application for a Craig's List gig / "temp jobs" / "computer temp jobs."

To take your ad literally from the top and start with Linux, I am typing on Ubuntu Linux, and Kwynn.com runs on an Ubuntu Linux VPS. My /t directory tree is evidence that this site is over 11 years old. A whois lookup on my IP address shows AWS, and thus this server is a Linux VPS.

I am still on the front page of Big Evil Goo for the search term "Python IMAP IDLE" without quotes. I'm #7. I've held #6 - 8 for over 7 years. Here is the direct link to my IMAP IDLE.

On the encryption end, here are the steps to do "by hand" a large portion of what TLS does. From memory, I create a long symmetrical key and then encrypt it using "Bob's" public key. I send the payload to Bob encrypted with the symmetrical key and then the sym key encrypted with the public key.
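
A minimal sketch of those steps using PHP's OpenSSL extension; "bob_public.pem" and the cipher choice are placeholders, and a real exchange would also want authentication (an HMAC or an AEAD cipher mode):

<?php
$plaintext = 'the payload';

// 1. Create a long symmetric key, plus an IV for the cipher.
$symKey = random_bytes(32);   // 256 bits
$iv     = random_bytes(openssl_cipher_iv_length('aes-256-cbc'));

// 2. Encrypt the payload with the symmetric key.
$cipherText = openssl_encrypt($plaintext, 'aes-256-cbc', $symKey,
                              OPENSSL_RAW_DATA, $iv);

// 3. Encrypt the symmetric key with Bob's public key.
$bobPub = openssl_pkey_get_public(file_get_contents('bob_public.pem'));
openssl_public_encrypt($symKey, $encKey, $bobPub);

// Send Bob $cipherText, $iv, and $encKey. Only Bob's private key recovers
// $symKey, and only $symKey decrypts the payload.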

I had a problem with Goo OAUTH working with the PHPMailer class. I started with someone else's solution, but I had to think hard to modify it. I considered posting a sample of this code, and I probably will if it's round two of my application. For now, I decided it would take too much work to make it generic enough for an example.

My GMail "positive email check" is using a Goo PHP library and is therefore somewhat removed from the raw server, but that is just as hard to get working as dealing with the raw server.

Regarding Postfix, a funny story about that. About the same time I wrote that encryption script for a friend, we were discussing running our own mail servers. It turns out that I did get incoming email working, as I explain: I was using port 25 for SMTP, incoming in this case. There was some talk that residential ISPs blocked it, but it wasn't blocked. Then I created an MX DNS record, set up Postfix, and sent email to the domain I used in the DNS record. I think what went wrong is that root received the mail a la 1991 Sun Solaris UNIX. I hadn't sent the email to an existing Linux user, so it went to root. The sudo command won't show you that root has mail. I didn't realize the email receipt worked until I actually did "sudo su" to open a root-user shell. Then I saw the notification a la 1991 at the command prompt. I should have been looking in, as I recall, /var/spool/...

I dealt briefly with SPF. Goo is doing the work, but it shows that I can implement instructions and such. For that matter, I stepped my client through adding the DNS record. I may send the actual domain name in the email; I shouldn't include it here.

$ dig example.com TXT

; ... DiG 9.18.1-1ubuntu1.1-Ubuntu ... example.com TXT
...
;; ANSWER SECTION:
example.com.		300	IN	TXT	"v=spf1 include:_spf.google.com ~all"

;; Query time: 188 msec
;; SERVER: 127.0.0.53#53(127.0.0.53) (UDP)
;; WHEN: Sun Aug 28 21:29:01 EDT 2022
;; MSG SIZE  rcvd: 171     

Regarding DKIM, I know what it is, and I'm sure I could figure it out quickly.

August 24 - the Gab troll

Weeks ago I posted a variant of my apprentice ad to Gab's "Programming, Computing, and Electronics" forum. As you can see, I acquired a troll. I'm not sure I've ever been trolled before. If I have, it's been a long time. I didn't even register "troll" for a while. At first, I thought it had a following. Its header says "1.2k Followers," but when I click, there is no one. I see that inconsistency with other people, so I'm not sure what to make of it. I also see lists of followers with some people, meaning that the following header does work in some or most cases.

Now that I have partially grokked "troll," I'm continuing to engage it because I have evidence of the phrase, "There is no such thing as bad publicity." And that's assuming anyone is taking the troll seriously and thus it's bad publicity. I have a few pieces of evidence that engaging it has kept my apprentice post alive. A few days ago I made contact with my first potential apprentice from Gab. I've also seen Gab HTTP referrers entering my site since the troll engagement.

In addition to keeping my post alive, this is an exercise in engaging a troll while keeping my blood pressure down. As I said a couple of days ago, I have almost no reason to think that my blood pressure will lead to any problem; I haven't taken the clot shot. This is simply an exercise in emotional control.

So, to address the troll's points: This is not a matter of being offended. If I thought anyone took it seriously, I could probably sue it for defamation. I suspect Gab would back me rather than fight handing over its identity, or leads to its identity.

When contrasting "the internet" with the real world, I suspect it is being deliberately obtuse. My real name is right there, and the troll is potentially damaging my ability to do business. By contrast, the troll is anonymous and thus takes no risk unless I went to the courts.

As I told it or at least implied, anonymous trolls are fine and useful in quite a few contexts. Public figures are fair game. Defamation law distinguishes public and private figures. I am not a public figure. In my post I am not advocating policy or trying to affect anyone other than a potential handful of people who might want to become apprentices.

The troll is only worth so much time. I should boil this down for posting to Gab, though.

the Gab version of this post

This is not a matter of being offended. The troll might want to do some research on defamation law. I don't think anyone takes it seriously, though, so I don't see a point in pursuing that. Now that I have registered the concept of "troll," I am continuing this for a few reasons. 1. To whatever extent the troll is bad publicity, I have evidence for the concept that "there is no such thing as bad publicity." I have my first potential apprentice reply since the troll engagement, and I have other bits of evidence that this "discussion," such as it is, is keeping my post alive. 2. It's an exercise in keeping emotional control.

August 23

A chat with Gab potential / probable apprentice...

Reasons to move from PHP to Node.js:

  • end to end client to server JavaScript
  • It's the natural language for and of MongoDB. PHP can be rather painful with Mongo when the queries get complicated, such as the $ symbol conflict (see the sketch just after this list).
  • I noticed a few hours ago that AWS Lambda does not natively process PHP. Similarly, I have seen instructions and even tests offered in several languages where Node is one of them and PHP is not.
  • Alpaca had Node and not PHP. The hired.com test quiz had Node and not PHP. Those are two more that come to mind.
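
On that $ symbol conflict, a tiny sketch with hypothetical database, collection, and field names (using the mongodb/mongodb PHP library):

<?php
// In PHP, "$gt" inside double quotes is variable interpolation, so Mongo
// operators must be single-quoted or escaped ("\$gt"). In Node,
// db.users.find({age: {$gt: 21}}) just works.
$users  = (new MongoDB\Client)->test->users;
$cursor = $users->find(['age' => ['$gt' => 21]]);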

A restatement: Kwynn's Rule of Software Development #1: NEVER develop without a debugger. I found that at least one of the links in my apprentice 2 paths page is still useful for showing what a debugger is. That's client-side JavaScript, but the concept is the same in any language. I use NetBeans and php-xdebug for PHP; I've also used Eclipse for PHP. I held my nose and used the free and open source SatanSoft Visual Studio Code for Node.js. I've used PyCharm for Python.

A debugger is something that lets you run the code line by line and see what it's doing, including watching the variables change and the call stack go up and down. You can also set "breakpoints" on specific lines such that the code runs until you "break" (pause) at a given line. Then you can go step by step. You "step into" functions or just "step over" them and see the result on the next line. Or you can "step out" of them. That's the basics. The quotes indicate terms that many debuggers use, if not all of them.

I should add that a debugger does not solve your bugs or literally debug them. It's a tool to let you debug them.
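
For PHP specifically, most of the debugger setup is configuration rather than code. A sketch of the php.ini additions for php-xdebug, assuming Xdebug 3 (the setting names changed between versions 2 and 3):

zend_extension=xdebug.so
xdebug.mode=debug
xdebug.start_with_request=yes
; 9003 is Xdebug 3's default; the IDE (NetBeans, VS Code, etc.) listens here
xdebug.client_port=9003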

August 22

The potential apprentice situation that I was ranting about for the last few days seems to have come to a bad end on his end. I suppose you could say he withdrew his application for apprenticeship. Perhaps I'll come back to that.

The potential apprentice from Gab is coming along. He can write! He passes my writing tests. His first question today is "Do you mind sharing any details in how you got burnt in previous work?"

Divorced, beheaded, died, divorced, beheaded...

lessons learned

I have written a lot below. I should record some lessons.

  • Beware of people who demand the phone too quickly, and / or who use an account manager intermediary.
  • consider escrow services. Escrow.com, as I recall, only charges 3% and has a plenty good enough rating with BBB.
  • Beware of doing too much work before getting paid. I think I've finally learned this lesson.
  • A number of things will trigger me and get me so worked up that the situation is unsalvageable. There are perhaps a number of things to say about this, but one is that it's a reason why I should not be working alone. I sometimes need just about anyone else's perspective.
  • In some rare instances, I might find a technical intermediary on the client's side: someone whom I would take as an apprentice in other circumstances. This probably would have saved at least one project: there was a specific person who got involved too late but might have saved the project had he been involved a week or two earlier.

Let's make a list of codenames / brief summaries, then perhaps I'll randomize their order and expand.

  • the opposite of the British "peer" telling the truth
  • meltdown over $100 - $200
  • vanishes when I start talking about charging (instance 1 - filthy edition)
  • 8 year deprecated PHP code
  • 7 hour talk / the traveler
  • Mr. disorganized in several ways and bad communication with everyone, not just me
  • doesn't feel he needs to pay minimum wage
  • pay me peanuts then Number 1 makes assumptions
  • vanishes when I start talking about charging (instance 2 - laborious edition)

That spans quite a few years, so it's not quite the disaster that it appears, but then again it has been bordering disastrous overall.

Below, several times, I mention having my blood pressure raised. I have not taken any "vaccines" at all in my adult life, so that hasn't damaged my heart. Nor have I been to a doctor in 19 years, and that was when I passed a physical for particular work. My heart is just fine as far as I know. The point is that I don't like having my blood pressure severely raised by clients. I don't like that emotional state. I'm not worried about dying from it.

opposite British peer

Opposite British peer: This was a wretched line of communication. This might still be salvageable to this day if I had the right intermediary. I should have had an intermediary watching the communication early on. I made a conscious decision to solve his problem whether he ever paid for it or not: I started chasing the rabbit, and I caught it! This guy really annoys me on a "political" level, too. I wrote a little bit too much without getting enough feedback. He complained about my long emails. They really weren't that long; he just probably wouldn't get off his effing phone. Again, I needed an intermediary.

meltdown

Meltdown: This guy's technique was to pay whatever I asked for the first payment and then hire or fire me based on what I charged. I gave him a bill he seemed to think was totally outrageous. He also seemed to think that I was going to demand payment. I never got a cent from him. Complicating this matter was that he had already given me a company email address, so it eventually made it easy for me to just stop checking the box and ignore him.

The issue of the corporate box brings up a larger point. I get very irritated at people who do things like demand tax forms when I am not an employee and we have absolutely no guarantee that I'm going to make it to $600. I don't remember what the difficulty with the box was, but it added a complication that I didn't need until there was a point to it. There have been other examples over the years. I think it goes to people trying to get commitment, but I make no claim of committing until we get beyond a certain point. It's a chicken and egg.

Web development alone is a vast field. The meltdown guy melted down over what was probably a question of $100 - $200. I very specifically said that I'd negotiate it. He seemed to think it was something ridiculously simple, and I was scamming him. Yeah, it may have been the sort of thing he did every week or so. I did it every year or so.

He eventually started backing down and realized that he had asked for me to do it in a way that required a bigger learning curve. Like I said, though, it was easy to stop checking the corporate box and be done with him. He'd already raised my blood pressure too high, and it took him too long to back down. I managed to keep calm for a while, and I'm the one who initiated the backdown process, which was quite an accomplishment for me in that such things are difficult--running is easier. But it took him too long to calm down; I'd already given up.

bill fail - filthy edition

I was trying to work with another dev on this. It is very hard for peer-level developers to work together. With apprentices, it's clear who is in charge at the technical level, even if the apprentice is in charge on the business level. If it's not clear, I need help on the business level, to the point of putting you in charge.

The dev misunderstood what I meant and thought he sent the final bill to the client. I can't conceive of how he'd think I would charge that little. I was charging him a sales commission for getting him the client--that's all I was charging for in that bill. Then he refused to clarify the issue and tried to figure out how we'd eventually get enough money out of the client.

I foolishly did a number of hours of work without getting clarity. It took that many hours to achieve "hello world." Then I sent the client the bill just for "hello world." I think he was irritated at the confusion at this point, and he never responded. I was embarrassed enough about the whole thing that I never pushed it, either.

8 year deprecated PHP code

She goes in the category of I got on the phone too quickly. If I had slowed the discussion down, I might have realized she wasn't all there.

If you look for the word "deprecated" I think you'll find my rantings below. Her other developers charged $150 / hour. I was perfectly happy with a fraction of that. Then I found that she was using stuff that was deprecated 8 years before and removed from the language 6 years before. That is insane. I told her that nothing could be done until I fixed that. She said that no one else had complained. What I should have said is that of course they don't complain when they're charging $150 / hour.

The clock of her server was wrong by something like 5 hours. It hadn't been rebooted in almost 3 years. Thus, with all this chaos, her system crashed too often.

I did way, way too much tedious work before billing her. I may have salvaged it, and, come to think of it, the situation might still be salvageable--again, with an intermediary. The final straw, though, was when her demo system crashed.

The $150 dev had the bright idea of updating the demo system with my dev system code without testing it, or he didn't test much. Part of the problem was probably the effing cache system. I tend to think anyone who makes a cache system without a very simple disable mechanism should be shot. He may have tested and it worked because the cache hadn't refreshed yet. Also, given the nature of their system, there was absolutely no excuse for needing a cache. It should all have run instantly.

He updated the demo system JUST BEFORE A DEMO! And thus I get the call because she's freaked and he's unavailable. The fundamental problem was that he needed to refresh 3 repos of code, not one. He thought, incorrectly, that they weren't related. I should have pushed back and demanded it all be in one repo because I could tell it was connected. I probably should have seen that coming, although I didn't expect him to do that a day later. I thought I'd have a few days to collect my arguments on the matter.

Within about 10 minutes, I just about had the repos refreshed, and then I think I was messing with the DAMN CACHE when he finally became available and chimed in. I may have had it working in another 2 minutes, but she had him take over. I thought he knew enough about git to simply reverse what I had done, but he did not. He proceeded to very tediously manually undo it. This is one of a handful of instances where I assumed the other guy knew more.

So I was being intentionally or idiotically sabotaged by another developer. Then I was called with emergencies and THEN overridden by someone who did not at all know more. This once again raised my blood pressure, and I walked away, probably correctly.

To make things worse, she had some live systems running on the demo machine. It was a mess.

7 hour talk

I got on the phone too quickly. And I talked for 7 hours, which gave me the mistaken impression that we were communicating in a relevant manner. To add to my irritation, this guy popped back on CL AGAIN recently. We talked twice with many years in between. I didn't get burned the first time--that was another story.

In any event, he talked because he was driving. This meant, though, that he had major connectivity problems that he really should not have had in the relevant year. He was very slow to look at what I've done, so I got too far ahead of him. At least, that was one problem.

Given that I thought we were communicating, I was experimenting rather than rushing. My experiment was a comical failure. He got very impatient, and it took me a while to realize that. Because I was experimenting, I did way too much work (AGAIN) before realizing that this wasn't going anywhere. He is actually another instance of vanishing when I presented a bill, but I should have known better in this instance.

Mr. Disorganized

I was using a variant of a CMS. If I had simply started writing his system from scratch, it might have worked. I eventually came to realize I'd probably made a mistake with the CMS, so I felt I had failed. I didn't push to continue when it fell apart. From his perspective, he was paying a lot and not getting far enough. I tended to agree with him. With that said, we were making progress. "We" because I was working with another dev on this. That relationship worked pretty well. I wish we had tried other projects together, but he had a day job and didn't need the extra money.

Between my partner and me, the client bounced 2 - 3 checks. That was the only time in my life I went to a bank to cash checks. Then he just ignored the next invoice, and, once again, I had had enough and wasn't going to push the issue.

He had communication problems with his employees, too. Also, he wouldn't email, and he'd call me at 10am when I wasn't awake. So multiple problems.

no minimum wage

He maybe, barely passed my writing tests, but once again, I probably got on the phone too fast. I managed to get one relatively substantial payment out of him. I should have been using an escrow service from the start. That probably would have solved our problems.

In this case, I needed access to very roughly $750 of his money. I should have asked him at that point for a technical intermediary on his end--someone I could train to push the final buttons such that I didn't need access to that money. It turns out there was a perfect person, but by the time I asked, it was too late.

He harped several times on the notion that I had stolen the money. I kept trying to explain to him in detail how literally anyone on earth could see I hadn't. (That should give some people a big hint as to the nature of the project.) So obviously this was grating to me.

Also, specifically because I was uncomfortable about this situation with the money, I did something meant to retrieve the money quickly. It was not stolen or lost, but it was temporarily inaccessible given the widely published nature of the system that a few 1,000 people used. That is, what I was doing was NOT obscure. That in turn broke something else. He got bent out of shape about this point that was in my mind quite minor.

When I started talking about the second payment, I was already riled and probably wasn't communicating well. I thought he was deliberately misunderstanding me, and I may have even accused him of such. That got him bent out of shape. At least I didn't accuse him of stealing money.

At one point, he went on about what was and wasn't billable. I was talking about such a trivial $ / hour for the very skilled work that I was doing that it pissed me off. I probably would have been satisfied with Waffle House Rock Star Grill Op wages of roughly $15 / hour. He drove me under minimum wage. Also, at one point I thought he was about to put me on a small salary. Thus, I assumed small amounts of long term training THAT HE ASKED ME TO DO were damn sure billable at a low rate.

I finally did put his system in production to his satisfaction just because I wanted to achieve the goal. Several months later he said that someone who had done that before (I had not) was impressed with my work. Again, it was probably salvageable, but he'd raised my blood pressure and pissed me off.

peanuts - Number One

I talked to this guy several times at 1 - 3am, so it was perfect on that front. Because of this, I accepted an insanely low hourly wage. I specifically told him that I hadn't been an employee in a long time, and that I wasn't necessarily going to be working 40 hours immediately. I also asked for a large degree of flexibility on hours in terms of time of day. I otherwise explained that it was going to take a while to get oriented. He was fine with that.

It would appear that he didn't communicate this to his Number One. After roughly 2 weeks, I started to feel that I'd actually found something stable. So if you'll excuse the phrase, I went fishing for women. That is, I started looking at singles boards and such. And I found an interesting one.

Around 12:30am Number One essentially asked me where I was. I was not on the clock. He seemed to assume I was working fixed hours and was on call and such. I told him that I was going back and forth with a woman. All of this was in writing, so I'm paraphrasing. I read his response as thinking I was stealing from the company.

I'd have to remember all the details, but, as I've already implied, I was not fully settled on the project. I was settled enough to go on dating sites, but ... Given that I'd already had a discussion with the owner on hours and such, and given the hourly wage, Number One's commentary set me off. This is one of several instances where I sent out an emotionally charged email. If you ever get one from me, you have perhaps 12 - 24 hours to try to calm me down. The owner went a bit too long. I wound up not charging them anything. I was not legally an employee yet. I think the expectation was that I would invoice at least the first time.

disappears on invoice - laborious edition

Got on the phone too fast, most likely. I showed him various editions of what he wanted, and he would say, "But can you do blah blah blah." The answer was sure, but at some point you need to pay me for what I've done. Again, perhaps an escrow service would have helped. He vanished when I sent an invoice. But he probably didn't see emails as significant. Again, the email test. It was probably salvageable, but the situation was so awkward I couldn't figure out what to do.

August 21 (at least 4 entries; the first just after midnight, the second starting 03:05am, 3rd at 20:42, the 4th soon after)

part 1, around 00:09

Note that my August 20 entry was posted a few minutes ago. Weeks ago I posted my apprentice ad to Gab. I got trolled for the first time. That's another story, though. I may decide to continue to engage the troll. I have some evidence that there is no such thing as bad publicity in that our arguing keeps attention on my ad. Along those lines, I got a reply. I posted to Gab twice--in May and a few weeks ago / a few weeks after May. This is my first substantial reply.

The question: "Is there an end-goal to the apprenticeship? Will the apprenticeship have the possibility of leading to working for you?"

I'll take the second question first. The positive part of the answer is that it would, in theory, be great if you could work for me. A somewhat distant and theoretical goal of having apprentices is to have a development company (or trust) that works during the evening and night. If I train apprentices, their technique will be compatible. The more realistic answer is that if you want to work for me, you'll have to help me find more business. That is one of my motivations to find apprentices--I need help finding business. There are lots of ways to help.

So, yes, that is a possible end goal. More generally, though, I want you to get to the point that you can find your own paid projects or even go get a "real job." I am not certain, but my educated guess is that with just a few months of experience, you'd start getting calls from job recruiters. I addressed this a handful of times in the last few days in response to another potential apprentice. (He was responding to my Craig's List ad.) That is, I addressed this just below in this blog. The relevant entries are non-contiguous, though. The first one was on August 14.

part 2 - starting 3:05am

The Gab potential apprentice and I are going back and forth now. I'm not asking anyone to do cold calls, although if anyone thinks that might be effective, I'd consider a good deal for a commission.

part 3 - starting 20:41

I updated "front man" regarding apprentices and much more than apprentices.

part 4 - a rant about a definitely non-Gab potential apprentice

The potential apprentice from Gab and I are doing just fine. The one I will rant about is from Craig's List. My latest ad expired a few days ago; he wrote a few days before expiration. Given that I'm about to rant about him, that's probably a bad sign. This is similar to "Mr. Zoom," however long ago it was. The situation is salvageable, but I get this sort of thing a lot. I'm ranting as much about the other instances as this one.

I told him I'd posted my August 14th entry below. I said in my email, "Let me know what you think." As far as I know, he didn't write back. There is nothing in spam. I am open to a failure in email, but I have no reason to think that. I had some concern that he mistook my showcase website as being that which he specifically doesn't want to do, so I wrote my 8/19 entry below trying to clarify that a website is an alternative to a corporation or trust. I don't have an artificial entity for him to reference. There are some things I wanted to clarify for anyone in the future, too, so I hope it wasn't a waste. For that matter, I'm trying to clarify this in my own mind. It's been hard to express all my thoughts on how apprentices and I can help each other.

So even though there was silence from the 14th to the 19th, I clarified and wrote him again. I emailed something very close to what I wrote here on the 19th. His reply was, "You've already sent me this and I read it. What backend projects can you help me with?" As I said, he didn't tell me he read it, so why should he get snippy? I have every right to never write him again, but I did, and I spent more time answering his question.

In sales there is a concept of "the call to action." Given that he wrote me, isn't "Let me know what you think" enough of a call? Perhaps not. Even if not, it seems that a lot of people have a problem with the concept that if I email you in the context of an ongoing discussion, I expect an answer unless I specifically say that there is more to come from my end. It's amazing how many discussions just end for reasons that I have trouble fathoming. Are people functionally illiterate? Will they not get off their flippin' phones and type? Can they not type? I believe the current version of my ad on this site mentions typing ability.

Oh yes, I mentioned his answer yesterday, just below. My answer to him was, in part: does he want ideas on what I can help with, or does he have ideas? This seems a reasonable question. As I said, if I start writing ideas I could be at it for hours. He's had every opportunity to look over this site and my GitHub and see that there are dozens and dozens of ideas. I'm not expecting him to do extensive reading like that, but if he's looking for ideas, perhaps he could have taken the trouble to look on his own.

Given his answer, which I'm coming to in a moment, I see the potential for confusion. I thought that maybe he had things he wants to work on, but he has no idea how to get started. Also, I'm not trying to make people do work for me for free as a one-sided deal. I'm willing to help them with their own work.

His reply a few hours ago, "I have no ideas. That is why I reached out to you. Am I wasting my time or are we going to do some work? This back and forth emailing is getting old. / Sent from my iPhone"

In some versions of my ad which are probably fairly easy to find on my site, I tell people it's probably best if they get off their flippin' phones when writing me.

Again, I hope the reader can see that "I have no ideas" is NOT OBVIOUS to me. That's why I asked, very politely. Also, I did give him ideas on how I could help in the general sense, and my 8/14 entry states my assumption as to what he was asking.

One of several points that nettles me is something I've observed too many times before. I hope that in any case of ambiguity, I suspend judgment until I get clarification. It's a red flag for me when people jump to bad conclusions. He's done it twice. Perhaps that should end that. I did write him back, though. I'll get to that in a moment.

I want to address something else that has come up before. "... are we going to do some work?" has come up before, with specific meanings attached. He may not mean those things, but I'm busy ranting, so I'll continue.

I've had one or two apprentices who are quite set on working on stuff I'm getting paid for because it's "real world." They weren't expecting to get paid, but they still wanted to work on something "real." I get that on one level, but to go back to the medieval apprentice model, that's like a beginning apprentice blacksmith demanding to work on plate armor. A blacksmith's apprentice starts with making crude nails.

There is also the complication that my main project is for an attorney, and thus attorney-client-privileged information is involved. I'd have to trust the apprentice, and several of them I would trust. Also, I am considering asking to make parts of that open source, but there are cost-benefit questions about that on my end alone, let alone my client's.

I also wonder if this guy is under the delusion that he'd be much help on a "real" project sooner than months from now. He's expecting me to spend that sort of time. I have spent a decent fraction of that time with apprentices, but that's a lot to ask in his 30th word past his initial email. I do need something in return, although that something may be fairly easy to achieve on the apprentice's part. What this guy is asking may not be at all easy on my part.

His 30th word plus he's been ornery twice. I have to like someone to spend that sort of time, and / or think they're going to make progress. I can't even communicate with this guy so far.

My latest answer to him, by the way, was "Anything we do on the back end will involve Linux. I'm running Ubuntu Linux (desktop), so that would be easier. Let me know when you have Linux running, or if you have questions." His installing Linux would go a long way with me.

I could comment further on my answer, but I think I've spent enough time on this rant.

August 20 (starting 23:42)

My potential apprentice answered my question about back end projects. "What backend projects can you help me with?" I read that about 10 hours ago. My thought for most of that time has been to roll on the floor laughing. (Yes, I am aware there is an acronym for that phrase.) Or perhaps I should cry.

I'm starting to appreciate the front end more and more, but I had to be dragged there. It's where the projects I found led me. I think it's safe to say my heart is still in the back end. So the only question is: do you want ideas, or do you want to present me with ideas?

If you want ideas from me, I'd have to resist writing dozens of ideas. I'm not even going to start until asked. It's too dangerous in that I could be at it for hours.

To elaborate on your presenting me with ideas, I have done and will do a huge variety of projects, so let's hear it. I'd imagine I'll entertain it.

August 19 (starting around 23:15)

Note that my previous entry was updated a few hours ago, during the night of the 19th.

A potential apprentice wrote me a few days ago. I replied here, but I made one imprecise statement that needs correcting, and I otherwise want to rephrase.

I mentioned a website to showcase his work. That was imprecise. I'll try again:

One way I can help apprentices is to get them work experience. The work does not have to be for me; it can be something you want to do. I can play a role somewhere between boss and thesis adviser. That is, I can make sure you're doing it well enough that it is legitimate experience. I don't have a corporation or other formal artificial entity, so that presents a very small but manageable complication. One option is that you say you worked for Kwynn.com. I'm fine with that, but I would understand if you don't want to be associated with my louder and louder "irritation" at the powers that be.

The alternative is to create a website that does not have to list me publicly, but I can speak as your managerial reference. The website doesn't have to have much content. Ideally I would want whatever you generate to be public, but, if not, the site only has to make reference to what you're doing.

August 18 - 19: "hello world" for the Alpaca stock and crypto trading API (in progress 00:57 8/18, continued 8/19 21:16+)

updates 8/19

Updates during the night of 8/19: Here is the latest code and what should be a permanent repo. Maybe I finally solved the code-fragments problem.

To add to yesterday's (this morning's) commentary, I was a bit concerned when Alpaca said they wanted NodeJS 14+, and the command line showed 12. I added version output, which currently shows v16.15.0. NodeJS installs versions of itself in ~/.npm, I believe. It likely does this based on the package.x files. Visual Studio Code is smart enough to know which version to run.

original 8/18 - early 8/19

The Alpaca Trading API hello world documentation is pretty good, but it contains an annoying bug. I was following the JavaScript instructions, but the bug may be in all of them. The first trading example is to buy BTC, but if you try to buy BTC with a "day" "time_in_force," you get an HTTP 422 error. The "gtc" setting rather than "day" works. GitHub says the settings are 'day' | 'gtc' | 'opg' | 'ioc'. "Time in force" is a general stock trading term.
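
To make the fix concrete, here is a minimal sketch assuming the @alpacahq/alpaca-trade-api NPM package and paper trading; the key values, symbol, and quantity are placeholders, not anything from my real code:

// minimal sketch, not my production code
const Alpaca = require('@alpacahq/alpaca-trade-api');
const alpaca = new Alpaca({keyId: 'YOUR_KEY', secretKey: 'YOUR_SECRET', paper: true});

alpaca.createOrder({
    symbol: 'BTCUSD',     // placeholder
    qty: 0.001,           // placeholder
    side: 'buy',
    type: 'market',
    time_in_force: 'gtc', // 'day' is what gets you the HTTP 422 for crypto
});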

I think I mentioned that I held my nose and am using Visual Studio Code, the free and open source version, to debug JavaScript (server-side Node.js). It works. I'll probably live. (Those taking Bill's vax may not live. SatanSoft should be seized and sold off to pay damages.)

I can manage in Node.js, and one day very soon I may really dig in, but for now I still run into unexpected issues. If you want to keep your API key out of the public code, it takes some unexpected syntax. Also, for debugging purposes, the example code needs to be modified a bit to deal with asynchrony.

Once again I'm going to refer to a specific version of code. The code is subject to updates, and new versions may be moved entirely. In fact, that is the broken version for purposes of demonstrating the time_in_force error. I just pushed a new, working version that should be easy to find from the above link.

Note how I use module.exports.publicSampleCreds in public_example_creds.js to make the data accessible to the main program. I am not certain this is the exact right way, but it works for now. I'm sure I'll learn more eventually.
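
The pattern is roughly the following; everything other than publicSampleCreds and public_example_creds.js is a made-up name:

// public_example_creds.js -- the sanitized template that IS committed
module.exports.publicSampleCreds = {
    keyId:     'PUT_YOUR_KEY_HERE',
    secretKey: 'PUT_YOUR_SECRET_HERE',
};

// main program: copy the template to a git-ignored file, fill in real values,
// and pull the object in (the real-creds filename below is hypothetical)
const {publicSampleCreds} = require('./real_creds.js');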

Then I write a couple of async functions to make sure the program doesn't end before I see my output.

Here is more about the 422 error. Note that in the createOrder.then() I need both the success and failure parameters to be able to see the error. The various levels are

Error: Request failed with status code 422
        error.response.data: {
            code: 42210000,
            message: "invalid crypto time_in_force",
        }
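
In code terms, that means passing .then() both callbacks, roughly like this; the order object is the one from the sketch further above:

alpaca.createOrder(order).then(
    (ok)   => console.log('order accepted:', ok.id),
    (fail) => console.error(fail.response.data) // code 42210000 shows up here
);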

Running out of (sandbox, fake) money gets you an HTTP 403 error, and error.response.data explains this. Oddly, when you put more money in your account, you need to use new API keys; with the old keys, trying to do anything will get you another 403 error much earlier in the program.

August 14 (first [and only] post after 22:35)

This is to a new potential apprentice. Your comment about websites versus APIs reminded me of an XKCD. Here is the Explain XKCD version, which is more useful for a handful of reasons, and the original. The main text is "If you do things right, it can take people a while to realize that your 'API documentation' is just instructions for how to look at your website." The server side (back end) of a website is, to a degree, an API. You know the term "back end," and there is such a beast as a "back end developer" who focuses on the back end of a website, which is (in part) the database and more-or-less explicitly an API.

(I doubt I understood the IERS reference at the time. Now I do. That IERS reference is quite funny when one gets it. About 22 hours ago, though, I was writing in my personal blog about how dangerous XKCD has become. Anyhow, those are other topics.)

For the moment I'll interpret your request as "How can I (Kwynn) help you get an 'IT' position?" I'll take this in a handful of steps. We'd have to figure out what project(s) you want to work on. I have lots of ideas that are back-end only, but they don't have to be my ideas or even anything I'm interested in. You work on projects, I periodically evaluate them, where "periodically" might range from a few times a day to monthly. Then you can list experience working for this site or some new site we create just to showcase what you're doing. You can list on your resume n months of experience working for kwynn.com or XYZ.com or blah.com. I serve as managerial reference to confirm it, and job recruiters will love talking to me; they'll want to recruit me, too. It probably would not take very many months to start getting contacts from job recruiters (headhunters).

That's the first notion that comes to mind.

August 9

Wordle

For keyword purposes: all 2,309 / 2309 Wordle words / word list / dictionary in JSON and raw text.

On a vaguely related note, my 2020 sitemap / XML.

August 8 (last entry roughly 22:30)

ENTRY 1: I created a utility to test my simple timeserver. For my own searching purposes: port 8123, TCP, UDP. (02:11 AM then slightly revised 02:12)

entry 2 - mocking a robot response to my Craig's List apprentice ad

I sent this to him at 9:26PM.

This "guy" (robot who identifies as male) very, very clearly didn't read a damn thing; it's a robot writing. I would like to mention again that I ALWAYS take time to at least make it clear I'm answering that specific ad. It makes this sort of rotten spam all the more obnoxious.

Oh, then his robot writes again 63 minutes later, as if he forgot something. I had decided not to send this to him, but now I'm leaning towards sending it momentarily.

"Ex-NASA ... Developer ..."

I hope that it's not just me who gives this guy backlash for invoking an occupying "government" agency. To name a few agencies, the purpose of the Department of Defense is offense in service to baby-raping demon worshipers. The purpose of the Dept of Treasury is to make us all poor. The purpose of D Education is to make us dumb. Prior to "Covid," the purpose of the FDA was to make damn sure that cancer is not cured. As Dr. Burzynski in Houston discovered the hard way, curing cancer gets you a 57 count Federal indictment. The current purpose of the FDA is to do as much damage as they can with "Covid" and "the vaccine." The purpose of the D Energy is to make damn sure that "free energy" is not released. Ask the late Eugene Mallove of MIT.

The purpose of NASA, among others, is to hide the truth of what's out there. The evidence of UFOs is vast, and NASA continues to deny. The evidence of ruins on Mars is perhaps not vast but compelling, and NASA keeps coming up with swamp gas and weather balloons.

Then there is the matter of NASA facilities used for MK-ULTRA. I believe both Cathy O'Brien and Fiona Barnett talk about NASA facilities used for sexual torture and other horrors of MK-ULTRA. What's up with that?

Then there is the matter of the missing Federal $21 - $23 trillion with a T. Mark Skidmore of Michigan State confirmed it, although Catherine Austin Fitts presented evidence first. As of roughly 2018, Fitts' conclusion was that the money had gone to the secret space program, which is running rings around NASA's 1943 published technology, as opposed to what is still hidden that was running in 1943.

Along the same lines, it is unclear how much of what NASA does is and has been fake. Most or all of the moon landing footage seems to be fake. Earth humans probably did go to the moon during or well before 1969, but not with the technology portrayed.

The jackass' name is a word I know from Iran. Is he a US citizen (leaving aside the complexities of THAT concept)? If not, why was he working for NASA? I believe I have gone on at some length about the Indian "developers" being a matter of treason at high levels.

And he's located in Silicon Valley. What a great reputation they have. DARPA Lifelog masquerading as Facebook, and Big Evil Goo vying for the most evil company in the history of the world, to name a couple of big ones. Anyone pumping out "Covid" and "vaccine" Big Lies is trying to kill tens of millions or many more people.

He mentioned a free speech product he launched. You can't simultaneously invoke Silicon Valley and advocate free speech.

August 2

I think I was about to launch into a new entry on 8/2, then I didn't. This has not been substantially changed since some time on 8/2.

Yes, I am up very, very early by my 25 year usual.

AWS IAM revision

I just revised the AWS IAM policy I created on July 3. I specifically want the AWS dashboard to give me various errors so I know I'm logged in with the limited access user. Today I got some really ugly errors, though, that took up way too much of the screen. I considered giving the limited user all of the "describe" powers, but I happened upon one in the "Visual editor" list, and then the little info icon led to confirmation that it covered the error I was looking for.

The new "power" is "ec2:DescribeInstanceAttribute"; the errors were:

Failed to describe disableApiTermination for instance i-0123...
Failed to describe instanceInitiatedShutdownBehavior for instance i-0123...

July 31

A few hours ago I posted this site's update / change log. The source code is linked from that page.

My blood pressure just went up while cleaning my inbox of calls from a job recruiter. I've already complained about this one, I believe. Several weeks ago I sent him my SatanSoft formatted resume exported from LibreOffice. One of many irritations is that we are 30+ years into the web era, and I am still asked for SatanSoft resumes. I answered his question about rate, citizenship, availability, blah blah blah. Then, like almost all recruiters always do, he went dark.

Note to job recruiters: you lose whatever smidgen of credibility you have when you go dark for 2.5 weeks. "Thank you for your information, but the job is taken" is all it takes. Apparently the job became available again, but I am not jumping up and down to waste more time. It's not just answering his questions and dealing with SSoft resumes. I generally explain that the first thing I care about is the night owl thing. (I'm writing this sentence at 4:23am.) I am very sick and tired of saying that over and over and having it ignored.

I am reading the transcription of his voicemails. Part of my irritation is that I really don't want to have to listen to him. He's Indian, and I really resent that I am dealing with Indians. I also find it interesting that Big Evil Goo Voice's transcription doesn't do as well with Indians as Americans. There must be hundreds of millions of hours of Indian voices to train the "AI" on. I suspect this goes to the "racist robots." Perhaps most Indians simply OBJECTIVELY cannot speak English clearly enough.

There are hysterical articles on "racist robots" / racist AIs. AI essentially proves that racism is objectively, mathematically valid. It seems the AIs have become "racist" on quite a number of occasions. I could go on about that; perhaps some other time.

Anyhow. Good. I feel a bit better now. I did respond to him and sent my resume again. I asked if he wanted me to answer his other questions again. This should be amusing if not blood pressure raising.

(posted 4:44am)

July 27

New utilities:

  1. my own copy of the HTML validator
  2. ipv4 echo
  3. ipv6 echo

July 26

I'm posting my July 26 and July 25 entries at the same time.

When I start a new instance, any sudo command, including the innocuous "sudo echo blah", results in "sudo: unable to resolve host ip-10-1-2-3: Name or service not known", where ip-10-1-2-3 is whatever is after @ at the command prompt and / or the result of "cat /etc/hostname". The solution is to change /etc/hosts such that the 127.0.1.1 entry (yes, 1.1) corresponds to /etc/hostname.
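
In other words, roughly this, where the hostname is of course whatever yours actually is:

cat /etc/hostname
# example output:
ip-10-1-2-3
# so the 127.0.1.1 line of /etc/hosts needs to match:
127.0.1.1 ip-10-1-2-3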

The above problem will happen with every new instance. This next problem has to do with a new install of MongoDB. Now "sudo apt update" results in "W: https://repo.mongodb.org/apt/ubuntu/dists/focal/mongodb-org/4.4/Release.gpg: Key is stored in legacy trusted.gpg keyring (/etc/apt/trusted.gpg), see the DEPRECATION section in apt-key(8) for details." I think I'll sit down and fix that now.

Well, the instructions were close, but it took a bit more doing. The standard but now slightly out of date instructions tell you to create a sources file and a public key file. The key file command needs to be modified as I show below. The command to create the sources entry is fine except that you need to modify the sources file as below, too.

wget -qO - https://www.mongodb.org/static/pgp/server-4.4.asc | sudo tee /etc/apt/keyrings/mongodb-server-4.4.asc
ls /etc/apt/keyrings/
# confirm the file exists
mongodb-server-4.4.asc
head -n 4 /etc/apt/keyrings/mongodb-server-4.4.asc
# confirm there is the beginning of a PGP public key there
-----BEGIN PGP PUBLIC KEY BLOCK-----
Version: GnuPG v1

mQINBFzteqwBEADSirbLWsjgkQmdWr06jXPN8049MCqXQIZ2ovy9uJPyLkHgOCta
# at this step YOU HAVE TO MODIFY the sources file so that it looks like the following
# the next command simply shows what it should look like when you're done modifying
cat /etc/apt/sources.list.d/mongodb-org-4.4.list
deb [signed-by=/etc/apt/keyrings/mongodb-server-4.4.asc] https://repo.mongodb.org/apt/ubuntu focal/mongodb-org/4.4 multiverse
# old version with my failed attempt at modification commented out below
# note that the signed-by completely replaces arch
# deb [ arch=amd64,arm64,signed-by=/etc/apt/keyrings/server-4.4.asc ] https://repo.mongodb.org/apt/ubuntu focal/mongodb-org/4.4 multiverse

sudo apt update
# now should result in
# ...
# Ign:4 https://repo.mongodb.org/apt/ubuntu focal/mongodb-org/4.4 InRelease
# Hit:5 https://repo.mongodb.org/apt/ubuntu focal/mongodb-org/4.4 Release  
# ...
# All packages are up to date.     

Note that the "gpg" command itself does not come into play. There may be ways to get it to work, but I didn't get that far.

July 25 - a knock down, drag out fight with Apache

I won the fight, but it was tiring. Now that I have a micro instance with 1GB of RAM, I should be able to (more easily) run my own copy of the W3 HTML validator. I was working towards that. My previous solution, months ago, on my local machine, was to run the validator on port 9999 and do all this "Proxy" directive and such. In hindsight, it was a clumsy solution, but that's what "the net" proposed, and it did work.

I decided I want my validator to live at https://example.com/htval. After much gnashing of teeth, I have two solutions, one somewhat more elegant than the other.

To back up a bit, I have restructured my /etc/apache2/sites-available files. Hopefully I'll post the latest versions soon. Earlier versions are in GitHub under a repo with sysadmin in the name or something. I separated the files into .conf and .inc files. .conf is the Apache format for config files. .inc is my arbitrary format meaning "include" files. The .conf files enable a specific site. The .inc files are included with, not surprisingly, an Apache Include directive.

I have a "private...inc" that uses the Define directive to define the server name and document root. That is, the Define var becomes the literal ServerName directive / variable, and the DocumentRoot. (Actually, I am still using /opt/blah for the DocumentRoot, but it could be done that way.)

For the local copy of kwynn.com, the kwloc.conf only has 2 lines: the private...inc Include and a "pubcommon.inc" Include. The pubcommon defines the non-secure http redirects and DocumentRoot and various other directories (Directory) and Alias(es) and such.

The Location entry below can be included from anywhere in the common file or the .conf file. It is my second and somewhat more elegant version precisely because it can be defined anywhere.

First I'll explain my solution, then I might get around to explaining the various difficulties I had to pick through.

I post my earlier "Directory" version further below. Note that the context of what you get from the URL changes between Directory and Location. That is, you're getting a different string into the regex and thus the regex must be different.

The biggest problem I had was a URL of /htval versus /htval/ . The Nu validator uses "style.css" rather than "/style.css" as the path. Given that just about everyone is going to access their system indirectly rather than through the default 8888 port, I really wish they had used the /. Given that it's open source, I'll add it to my stupendously long to do list. That is, I should lobby for the change and / or make it myself when I have time for such things.

Anyhow, the first rewrite rule turns /htval into /htval/ . This should NOT use the P (proxy) option. The proxy option does not change the URL, but we need the URL changed so that Nu interprets style.css as /htval/style.css or /style.css, depending on how you want to look at it.

The second rewrite rule solves a somewhat puzzling problem. Even though there is an index.html in Nu, trying to access it directly results in a 404 error. If you want the Nu home page, you have to ask for precisely / . The third rewrite is a fairly standard rewrite of the rest of the URL.

As it turns out, 302 is the default for the R flag, but I think it's best to explicitly use 302 (temporary redirect) anyway. I guess once you're certain it's working it doesn't matter, but I think it's best to err on that side. Maybe in a few months I'll change it.

Remember that if you do use 301, if you want to change anything you'll have to go to the History in Firefox and "forget" the site to make it stop using the 301. Also remember that you'd better do a hard refresh when testing: control - F5 in Firefox. Also also (sic) remember that you have to restart or reload Apache with every change.

I don't claim to fully understand the L (last rule to execute) option. It works. I'll leave it.

This is the sort of development (sysadmin) where you literally have to take one line at a time and make sure the first one works before you keep going. My earlier versions dealt with ProxyPass and ProxyPassReverse and all sorts of unnecessary code. I was changing too many things at once. I was previously using port 9999 and redirecting to 8888. I was using something like 15 lines. The following is a huge improvement on a number of points.

I thought I was going to show the code then explain it, but I did it the other way around.

# begin included file
<Location /htval>
	RewriteEngine On
	RewriteRule htval$   /htval/ 			 [R=302,L] 
	RewriteRule htval/$ 	http://localhost:8888/ 	 [R=302,L,P]
	RewriteRule htval/(.+)$ http://localhost:8888/$1 [R=302,L,P]
</Location>
# end included file
    

The following earlier version was in the Directory entry of my DocumentRoot directory.

RewriteEngine On
RewriteRule ^htval$    /htval/ [R=302,L]   # do NOT use P for this!!! ****
RewriteRule ^htval/$ 	 http://localhost:8888/ [R=302,L,P]
RewriteRule ^htval/(.+)$ http://localhost:8888/$1 [R=302,L,P]
    

July 24 - a second, recent kwynn.com instance / MongoDB upgrade

After getting the new home of kwynn.com working, I realized I was still using, rather embarrassingly, MongoDB v3.6. It seems that hasn't bitten me yet. I decided, though, not to wait around for it to bite me. Locally I am running v4.4, which is the last of the v4.x. Version 5 has been out for a while, and they are up to 6. I mentioned months ago that I cannot run v5 locally because my CPU is too old. The good news is that version 4.4 doesn't reach end of life until February, 2024. This situation irritates me, but if I solve my basic problem of finding enough work, everything else gets solved. It's probably not worth even digging at a Raspberry Pi for the moment.

In any event, kwynn.com already has another new home over the course of several days. This time I set the time to live (TTL) on my IPv6 AAAA records to 1 hour, very early this morning. Just as I sat down to create the new instance, I set it to 5 minutes. Once the new instance was ready, I gave the new records a 5 minute TTL and left them both "on." I found that it's probably a good idea to close one's browser after 5 minutes because "old" tabs (20 minutes old or less) will still go back to the old address.

MongoDB v4.4 would not auto-convert the old data files. I think I went back to 4.2, but I don't think I went back to 4.0. I may not have taken the time to realize that 4.0 existed. After the second version failed, I did this (hedged example commands after the list):

  1. Export (mongodump command) the data from the earlier version
  2. stop the server
  3. remove all the files under /var/lib/mongodb (Don't remove the directory because Mongo will not necessarily create it.)
  4. upgrade Mongo (and make sure it's running)
  5. load the data (mongorestore command)
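
In command terms, that's roughly the following sketch; the dump directory name is made up, and the paths are Ubuntu defaults:

mongodump --out /tmp/mdump         # 1. export everything from the old version
sudo systemctl stop mongod         # 2. stop the server
sudo rm -r /var/lib/mongodb/*      # 3. the files only, NOT the directory itself
# 4. upgrade Mongo, then confirm it's running: systemctl status mongod
mongorestore /tmp/mdump            # 5. load the data back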

The careful and knowledgeable reader may have understood the context of the instance change, but I'll clarify. I made a no-reboot image of my live instance, fired it up on a c5ad.large instance, did the Mongo upgrade, and tested everything with a subdomain of an extra domain I lease. Then I made an image of the upgraded system, loaded it as a t3a.micro instance, tested again, and then created DNS AAAA records for the new system. I watched for signs that the browser was picking up the new records at least part of the time, then I turned off the old records and switched the IPv4 via the AWS EC2 Elastic IP. I watched the web server access log of the old system to make sure requests stopped coming in, then I shut the old (virtual) machine down.

July 23

I originally posted this around 13:48. At 13:52 I added one word to my email to him and emailed the jackass. I added the word below, too.

In my own pure tech news, the new instance of kwynn.com went live last evening around 8:29. It's run fine since, and the CPU credits are still charging. I should make some more notes on that, but I have other topics I want to address.

In non-technical news, I was considering beginning my job application to be an influencer for the Russian government. I'm somewhat serious about that. Unfortunately, I should deal with a certain jackass first.

the latest robot script kiddie jackass

Several days ago I again posted my apprentice ad to Craig's List Atlanta computer gigs. I got two legitimate responses, and both showed an initial handshake. They have petered out on the other end, though. That's another sub-topic.

I also got a blatantly robotic response, and today I got the 6th email from the robot. That's the 6th email since early on July 18.

First of all, the email prominently mentions Unix. Linux ate Unix' lunch by 2004 or earlier, and now you'd be hard pressed to find a true Unix. The iCultOS is based on the Berkeley Software Distribution of Unix, but I don't know how much it's been mod'ed. Besides, only a handful of people know that.

Oh, crud. I made the mistake of going to the custom domain name / site of the email address. Now I have to mock it, too. First of all, it doesn't have a security certificate. I didn't have one, either, until recently, but since then I've spent so much time mucking with them that I can mock anyone who is as pretentious as this jackass. It's not installing the certificate in the live environment that is hard, either. It's getting it working on various test systems that's somewhat harder.

The site prominently features the standard photo of a diverse group of 7 people. It would appear that the robot's script kiddie is diverse, too. It's a tiny photo, but he looks black (50 - 70% black ancestry, at a guess). A friend of mine is a 25 - 45% black software developer. We actually haven't talked shop that much because our mutual interest is not technical, but I have every reason to believe that he's at least competent, and he may even be quite good. I conclude with some confidence that he's not a diversity hire. I'm sure the diversity helps him get hired, but he doesn't need the help.

The script kiddie jackass is using a Scottish given name, but he's probably black, and he says he works for $10 / hour. When Scotland regains independence in a few years, they should sue this guy for impersonation and reputation infringement and such.

I sometimes spend over an hour composing responses to CL ads. I have come to realize that this is almost always very stupid. However, as with the SSL cert, I get to mock "I have a few key questions I'd like to ask you about your project. I have given it a lot of thought. (emphasis added)" I suppose his generic questions are good ones, and perhaps I should even read them in full, but they are clearly generic, and he has given my ad no thought at all. I might find a way to work with him if I weren't so damn irritated, and I have little reason to believe he's competent.

Actually, his questions are good ones. I should even consider writing answers publicly, not to him.

I suppose that will suffice.

On second thought, I should compose my response. He already has my real spam-catcher email address, so why not mock him directly?

Dear script kiddie jackass running this robot, if indeed there is anyone as biologically living as an ass,

I have spent two hours on some Craig's List computer gig ad responses, probably longer. That is admittedly stupid on my part, but I really resent it when you spend no time at all. The statement "I have given it a lot of thought" is extremely irritating when you have given my ad in particular no thought at all.

I have other complaints / mockery that I have written out elsewhere. That will suffice, though. Get your robot to stop emailing me unless it's with a grovelling apology.

July 22 - yet another chapter in the upgrade

I'll try again to recap what needs doing for this and future upgrades. I may be a few hours from throwing the switch to my Ubuntu 22.04 image.

When doing the AMI (image) quick launch from image ("Launch instance from AMI"), the IAM role is under "Advanced details" > "IAM instance profile."

As I approach go-live:

  • Using the entry in /etc/hosts, I should be able to test kwynn.com on the new machine before it gets DNS routing. I already have a copy of the SSL cert. (Yes, this works.)
  • Give the instance some time to charge up CPU credits.
  • quick link "cpu" must be tested on an instance that has limited CPU credits
  • rsync to new machine
  • update /opt/kwynn if needed
  • reset /var/log/apache2/access.log and error.log so as not to conflict with the last entries of the old ones.
  • test all services in launch-wizard / security group / firewall rules, including 4 and 6 ping

General testing / steps:

  • update composer libraries both locally and in the cloud - update ALL libraries
  • test my contact form
  • go through my quick links on my local machine - it seems I was very bad at that
  • the web random generator (started 2017) tests nanopk() and nanotime() and such
  • remove older keys from ~/.ssh/authorized_keys
  • set up the Apache conf files to test both the test name and permanent name

certbot

sudo certbot certonly --dry-run --apache
sudo certbot --apache
    

It's probably easiest to let certbot mod the Apache conf files as it wants, then go back and set them up for both test and live domain names.

July 21 - Kwynn.com Ubuntu 22.04 upgrade update

part 2 - PM

That AWS EC2 serial console feature is pretty cool. I'd say it took 45 - 60 seconds from starting the instance until I started getting data. Unfortunately, for this first run, I have once again run into "A start job is running for Raise ne…k interfaces (5min 28s / 5min 40s)." Interesting. Once that reached 5:40, it fully booted. Once I saw the basic network messages, I could ping but not get web. After 5:40, I got web.

Update composer libraries in the event of weirdness: /opt/composer$ composer update
/opt/composer is where I keep my composer libraries. Note that you have to run a plain "composer update" rather than "composer update blah" because if you update individual packages, the dependencies will not carry through. If you're in the equivalent of /opt/composer, then "composer show" will show you all the versions. You can compare dev to live.

part 1 - AM

New update 02:38. My nanopk extension does need installing in PHP 8.1, but it doesn't appear that is what's causing my problem with my web random generator. That might indeed be all for tonight.

First posted around 01:20, then updating 01:36. Actually I'm posting the 2nd version around 2:04am. I may be losing some coherence.

Going back to my June 10 entry, I just changed this on kwynn.com, so keep it in sync!!

To elaborate on a comment below, which I wrote earlier, I have to test it with a micro instance anyhow because that is part of the point of the operation.

This is the bug that I sometimes get with network interfaces, or something like it, "A start job is running for Raise ne…rk interfaces (5min 6s / 5min 31s)" I'm not sure why the text is garbled. This message is repeated dozens of times.

I am correct that it took 5 - 10 minutes to see the log. I'm rebooting at 1:43 to try again.

"For boot or networking issues, use the EC2 serial console for troubleshooting. Choose the Connect button to start a session." That needs to be set up separately. There is some security button to push. I should look into that. Ok, it's one little button to push. Off hand, that's not getting me any more data than the system log does. At 1:49 I rebooted again with the serial connection open. This time I see the traffic.

The serial console shows it's working, and then ssh worked. Oh, I forgot to use my own "launch wizard" security group, so it won't receive ping.

Note to remove unwanted keys from ~/.ssh/authorized_keys. When you create a new key, AWS simply appends it to the existing files. If you came from an image with another key, that key will also still work. The keys are nicely named, though, with the name of the PEM file, so it's easy to tell which one to remove.

I learned a trick with Apache config variables that almost does what I want in terms of abstracting the domain name. I need to abstract it so that I can test a domain with an SSL cert and then turn it into kwynn.com. I'm using a "Define," but that isn't quite what I want. I'm looking for a way to use ServerName later in the config file. No luck so far.

No matter what I do with the above, certbot is going to bulldoze it if I let it, and letting it is easiest. So the answer is probably to let certbot bulldoze the SSL config, then come back with the variable and test it. Note to self to capture those certbot commands.

I should test on a micro instance because the compute instances don't have a CPU quota.

Remember to disable PHP 7.4 (or whatever) and enable the latest version as an Apache mod.

This attempt stalled out because I have to upgrade my nanotime PHP extension. That isn't hard, but I think I've had enough for the "day," or at least for now.

July 20 - database theory

I have updates from my "usual apprentice" who got a title something like that about 8 months ago. For about 10 months he's been working for one of the big tech companies. It's not one of the really big household names, but it is very famous in tech circles, and it was a major power decades ago.

His first job did not go very well, but he transferred within the company a few weeks ago. Now he's very happy.

His new job led him to some questions about databases. They were of the sort that I suppose should get a blog entry, based on our tradition months ago.

He's asking about ACID and normalization (3rd normal form). "[Do] you apply these principles when doing database design[?]" Yes, I do, with a lecture and perhaps a long lecture.

I'm getting somewhat away from the question, but ultra-normalization is one of the designs that drove me crazy with Drupal. I guess it's still driving me crazy because I'm still writing the migration scripts from Drupal MySQL (MariaDB) to MongoDB. Drupal version 7 has this concept of a "field collection" which is one of the more perverse designs I've ever seen.

Thankfully there were only 2 field collections before I got involved with the project. The most annoying one contained the main billing fields--which legal case the hours are billed to, notes on what was done, the broad category (jury trial, trial prep, writing legal briefs, interview with client, etc.), and then the hours billed. (There are other fields that made for an interesting design spec from my client, but those are the basics.)

I'm not going to take the time to think through the formalism, but the field collection is probably very correct in terms of 3rd normal form. There are likely other ways to have achieved that, though. What I had to deal with was a case of people following rules in an inappropriate context. Although that isn't even quite correct. The data format (schema, design) worked well enough for its one intended purpose, but the problem was that I needed to use the data in another context. The main purpose was to interact with the data on the screen. I needed to do batch, back-end-only operations, though. It took insane queries to get at the data directly with SQL.

That sort of problem is one of several reasons why I say that CMSs are only worthwhile if they can do 90% of what you want with the core product plus existing plugins. Drupal did maybe 40%, and some of that it did badly.

A large part of the problem is that Drupal needed to be very, very general and handle any type of data. This goes to my saying about CMSs that they are so general that they don't successfully do anything specific. (The saying is based on a comedian's saying from decades ago. I think I've referenced that elsewhere.) Whatever benefit Drupal gave to get the project started, it's on net cost the project an enormous amount of money as I undo the damage of starting with Drupal.

One point being to think through all the things your data needs to do and the ways it needs to do it. If you're first designing for user interaction, test the back-end queries you may need to use to make sure that your design covers both scenarios.

To head back towards the original question, on one hand, I barely remember the formal definition of 3rd normal form. The saying I learned was that the data in a table should be relevant to "the key, the whole key, and nothing but the key." "Key" in that sense means the unique index, which may be several fields.

It's been a long time since I did RDBMS table design because I've done all new work in MongoDB for 5+ years now. One of the benefits of MongoDB is that you don't have to worry about the design while you're prototyping. You don't have to "CREATE TABLE" in MongoDB--you just toss data into a collection. That has the theoretical potential of being a horror show, but my RDBMS training helps me stay disciplined. The biggest problem I've had is that I make mistakes around integers coming from an HTML form as strings ("3"), writing that string to MongoDB, and then having that string fail a query match with integer 3. That zapped me enough times that I've gotten more and more disciplined about always putting my IDs through a validation function. I have one in my client's system called vidord()--valid ID or die.
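
Here is the trap in miniature, in mongosh terms; the collection and field names are made up:

// MongoDB will not coerce "3" to 3 during a query match
db.bills.insertOne({lineId: "3"})  // the string snuck in from an HTML form
db.bills.findOne({lineId: 3})      // null -- no match against the string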

I should add that there is a question whether IDs should ever be integers. You never do math on IDs, so why should it be a number? A system may be more flexible if it doesn't require integers. First of all, I'm not talking about the '_id' in MongoDB. The '_id' is the required unique index in MongoDB. That is almost always a string both in my case and by default. I tend to use separate IDs for querying purposes; I'm talking about that separate ID. In my systems, I make the '_id' a human-readable string so that I can see what's going on at a glance in the Robo3T MongoDB tool. Robo3T is a direct equivalent of MySQL Workbench. By default, the rows (documents) list themselves by _id in Robo3T (or the command line), so it's nice to have something human-readable. I can make my own IDs unique indexes (keys) with createIndex(..., ['unique' => true]).
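
And the unique-index piece, again with made-up mongosh names:

// make my own ID field a unique (and descending) index
db.bills.createIndex({lineId: -1}, {unique: true})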

In any event, Drupal used sequential integer IDs, and in this case I don't see a problem continuing that. For one, it would be more difficult than it's worth to change it, even though I'm migrating to MongoDB. For another, I got burned decades ago by inflexible numeric IDs, but this is my system with a specific purpose; I don't need to be general.

The integer IDs in MongoDB bring up the ACID issue. The question of sequential IDs in MongoDB was a thorn for a few years. That doesn't mean I spent 4,000 hours trying to solve the problem, just that it rolled around in my head on and off for years. MongoDB does not have a native AUTO_INCREMENT, nor can you lock a collection (table) like you can in an RDBMS.

AUTO_INCREMENT is a built-in guarantee of atomicity, the A in ACID. The RDB takes care of the potential race condition for you.

In MongoDB, the answer is most likely to avoid sequential IDs *IF* you're starting from scratch. Otherwise put, if you're starting from scratch, you might as well use the required _id, although I think it's a good idea to add human-readable elements as opposed to the default _id (a hex number that turns out to have useful properties, but that is not at all obvious, nor is it at all human-readable).

As I said, though, I am going to continue using integers with my lawyer client's system. There are a number of decent ways to solve the problem in MongoDB. The one I'm using is to do a Linux level semaphore lock using the DAO file's path to ID the lock itself. I do all the processing I can without the ID, then I get the lock, find the max existing integer with a findOne sorted by descending index, add one for the new ID, write the new row, and then unlock. It's a really good idea to already have a descending-sort unique index on the ID field anyhow, for data integrity purposes. Thus, that query will always be as fast as it can get.

To go back to the original question again, yes, when I was using MySQL I thought about what needed to be a bridge table (many to many) and otherwise how to arrange the data. Something close enough to 3rd normal form was burned into me decades ago such that I'm not consciously thinking about it, but yes, I am unconsciously thinking about it.

Another benefit to MongoDB is that in many cases you don't have to normalize. The fundamental structure is different, or at least it *can* be different. You could write MongoDB collections just like RDB tables. MongoDB does not have referential integrity constraints, though.

You don't have to normalize in the sense that often you can get the data you need in one document without the equivalent of joining and pulling from related tables. Put another way, I moved to MongoDB in 2017 in part to avoid normalizing.
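
For instance, a made-up sketch from my billing domain: the equivalent of a join can live inside one document.

db.timecards.insertOne({
    _id: 'tc-2022-07-20-example',   // human-readable, per my Robo3T comments above
    caseId: 'case-jones',
    lines: [
        {lineId: 1, category: 'trial prep', hours: 2.5},
        {lineId: 2, category: 'client interview', hours: 1.0},
    ],
})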

With that said, a few weeks ago I found myself normalizing in MongoDB, and I'm probably about to do it again, both within my lawyer client's project. I recently posted a drag and drop example where the ordering is kept as a float. I decided that the ordering table (collection) should stand on its own rather than being part of the billing data collection. This goes to "the key, the whole key, and nothing but the key." The billing data is associated with a bill line item ID. The ordering is based on the ID of the whole timecard because the order represents the order of the line items on the screen.

It would probably take me a few minutes to think through why I decided to "normalize" the ordering, but not now.

In some cases I've needed to consider lots of data coming from an HTML form nearly at once, and I had to consider how to reject network packets arriving out of order. In other words, I don't want older data to overwrite newer data. I use HTML "data-blah" attributes / JavaScript DOM element.dataset.blah variables to add both a sequence and a ms timestamp, from the client's clock. I know for a fact that two keystrokes when typing can get the same ms. If you go read the fine print of JavaScript threading, I *think* with the sequence you are safe from race conditions. For further protection, though, I also have a feedback loop that doesn't turn the field green ("OK" color) until the data that the database wrote matches the current value of the field. In some cases I wait until 2 seconds after typing stops and then check one more time.
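
A browser-side sketch of the stamping; the element and field names are hypothetical:

// hypothetical sketch: stamp every edit with a sequence and a client ms time
let seq = 0;
document.querySelector('#hours').addEventListener('input', (ev) => {
    ev.target.dataset.seq = ++seq;      // two keystrokes can share one ms...
    ev.target.dataset.ms  = Date.now(); // ...so the sequence breaks the tie
    // send {value, seq, ms} to the server; the server ignores anything older
    // than the newest (seq, ms) it has already written
});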

So, that is all to say that yes, I consider normalization and ACID, even in MongoDB.

One day perhaps I'll add links, but I should at least list all of the solutions I came up with to the MongoDB sequence problem. They are all in my GitHub, but they are scattered among several repos and obscure corners thereof. You can easily write a "find one and increment" in MongoDB. That process is different than an AUTO_INCREMENT, though. It's a loop that runs either once or twice. I don't remember that exact logic, but you are guaranteed to get the next sequence the second time around if not the first. I ran it with all my cores at once and proved the before and after. It works, and the logic is sound, too, although I don't remember exactly what it is right now. This solution assumes that you have a document whose only purpose is to keep track of the next integer.
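
The counter-document variant usually looks something like this mongosh sketch; the names are made up, and this is the generic pattern rather than necessarily my exact loop:

// one document whose only purpose is to hold the next integer; concurrent
// first-time upserts can collide, hence wrapping this in a loop that runs
// once or twice
db.counters.findOneAndUpdate(
    {_id: 'billingSeq'},
    {$inc: {n: 1}},
    {upsert: true, returnNewDocument: true}
)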

The "before and after" means that without the loop, the sequencing would trivially fail with all cores banging. With the loop, every number between 1 and several million were assigned.

As above, the Linux level semaphore works as long as all code doing the increment is using the same lock, which in my case is the same file of PHP code.

I wrote a PHP extension to get each core's clock tick plus the core ID. That will get you unique integers that always ascend (between boots), but they will not be sequential. The clock tick resets with each reboot, so you need to add another field such as ns time or even s time. Second-level time will work fine assuming your system cannot reboot and start running the application within 1 second. To be unique among different machines or VMs, you'll also need a field to ID the VM.

As I remember, if you're banging away with all cores in a tight loop all trying to get the next sequence, the core tick is much, much faster because there are no locking issues.

That's plenty enough nattering.

July 18 - Kwynn.com system upgrade, continued

I think I started this on June 10. I started again to upgrade to Ubuntu 22.04. I found 8 links in my quick links that were hard-coded to kwynn.com. That is usually a bad idea. While I was testing that, I ran into just the sort of problem I was looking to check: the sunrise function in PHP is deprecated, so I have to fix that.

July 16, 2022 - I crashed this site this morning

It seems that the latest updates of Ubuntu 20.04 and / or around the 5.15.0-41-generic kernel finally exhausted the 0.5 GB RAM of a t3a.nano instance. Ubuntu 22.04 server requests 1 GB RAM, but I didn't go back to look at 20.04.

In the AWS EC2 web instances list, if you're on the instances list with an instance checked, or else drilled down on a specific instance, Actions -> Monitor and Troubleshoot -> Get system log is helpful. It seems that it takes several minutes to update, though. The message
"[ 3.159381] Out of memory and no killable processes..." is a big hint, but it was somewhat hard to find. The last message,
"[ 3.169146] ---[ end Kernel panic - not syncing: System is deadlocked on memory ]---" is less clear, but it helped me figure it out.

I almost always reboot by creating an image. By default, creating an image causes a reboot so that the image is completely consistent rather than captured from a running instance. (You can change that option in the image screen.) It was very handy to have created an image because I picked up where I left off, at least in terms of data. I upgraded a number of items while I was at it, but I'll come back to that.

I have found that something like 1 in every 100 boots goes badly for who knows what reason. I've found that most often on desktops. With EC2, sometimes boots go badly due to the network card not syncing correctly. That happens once every year or two.

In any event, when several minutes went by without ping starting to work or any other sign of life, I first tried a combination of rebooting and stopping and force stopping and then rebooting. Given the log delay, I eventually saw that I was going around in a circle with the same failure.

Then I tried launching an earlier image. That worked. I started upgrading the software of the image to see if I would indeed have the same problem. I eventually confirmed that I did. At one point, the update took a very long time--maybe 7 minutes. Once upon a time, some types of instances started with CPU credits. The t3a.nano and t3a.micro do not. That was probably part of the problem, but I think the settings were such that I was simply charged a few cents for the extra CPU rather than the system slowing way down waiting for the credits to rise. In any event, I seemed to be pegging the CPUs (I didn't even run top to be sure.) Although I may have been pegging the EBS disk. See below.

I started a c5ad.large image to see if the horsepower (a funny image) would help. Then I think I was up against EBS delay because it took a long time there, too.

I think I found that the upgrade worked on the .large, so right around that time it occurred to me to boot the image I had just made with a t3a.micro with 1 GB RAM. That worked and had all my data, minus a few minutes when I think kwynn.com was pointed at another image.

I was down for roughly 30 minutes. I could look at the web logs to be sure, but meh.

Lesson learned: Consider your DNS TTL (time to live) settings, especially for IPv6 in the case of AWS. More below.

Lesson learned: Periodically clean up the various "Security Groups" that crop up even with light usage such as mine. They have names like "launch-wizard-22." Those are the firewall rules. By default, one is created with each instance launch. I was having mild trouble finding / remembering which one I was actually using and selecting it. They don't let you delete the "default" even though one is almost never using it. Then label the remaining ones by clicking in the "Name" field that is "-" by default. I just took my own advice, deleted roughly 23 of them, and labeled the remaining one "live20to2207etc".

I add this because selecting among the security groups while I kept launching instances was very annoying.

Lesson learned: When you select an image to launch, the quick launch screen does not by default prompt you to select a role. It's in the advanced screen. This is generally not a big deal. It just affected my CPU usage widget. A role gives an instance permissions like a specific user, or a role can have an IAM policy, where IAM is a very fine grained permissions mechanism in AWS.

IPv6 / AAAA DNS record TTL

In an effort to be kind, until this incident, all my DNS settings had a 24 hour time to live / expiration. For 32 bit (IPv4) addresses, the Elastic IP address can be pointed at any instance you want. Years ago, I had some trouble getting IPv6 to work at all. Once I did, I never looked into assigning specific addresses equivalently to Elastic IP. I just booted up and then got the IPv6 from ifconfig or the instance dashboard.

It took IPv6 about 17 hours to refresh. Lack of a working IPv6 did not seem to hurt anything; browsers / DNS services just use what's working. I still found it disconcerting and was very happy to see the browser finally use IPv6 again.

items to research

  • Is there an IPv6 equivalent of Elastic IP? Perhaps a 1 IPv6 address subnet?

upgrades

One bit of good news was that I upgraded to 18 GB, and it's called gp2, as in general purpose SSD, version 2. gp2 is, I assume, a faster SSD. Now I'm that much closer to my upgrade to 22.04, which I wrote about a few weeks ago, below.

pro solutions

In the "real world," if I were running a larger operation, the answer would be to have several machines running a site. And / or to move back and forth between 2 instances as one is rebooted.

The point being that I know how to prevent that sort of thing happening to a more important site. I'm not sure how many more measures I'll take, though. I have other fish to fry.

final (?) thoughts

There is probably more to say on this, but I'll post this.

July 11, 2022

I expanded my June 11 entry into more front end examples.

July 4, 2022

part 1

I am adding a foreword to my May 30 "Mr. Zoom" entry.

part 2 - more ranting about those who ignore the night owl request

In the last few weeks, several likely American job recruiters have acknowledged my night owl request in a useful way. For that matter, so did a lady who is probably Arab by both lineage and location. By contrast, I've had 2 - 3 developers tell me that dealing with Indian job recruiters is pointless, and they all have less reason or far less reason to be "racist." (Depending on what one means by the term, I am to some degree unashamedly racist.) So perhaps I am a fool for even engaging them.

One Indian recruiter made a vague reference that may have been an acknowledgement, on a voicemail. He won't acknowledge it in writing, though, despite my having gone into some detail. He's not responding to my emails at all, in fact; he just keeps calling me. The number he's calling is meant to be a tar pit for job recruiters. It's not set up to ring.

In roughly 2013, I posted my resume to Dice.com and got 30 - 40 calls the next day. I would go batpoop if people did that on my cell phone. I am not talking to a recruiter until they clearly acknowledge the point and indicate there is some small chance I can get what I want. It seems, by the way, that Dice has now been overrun by Indians. That's why I'm not making a live link.

The tar pit is a Big Evil Goo number. If Goo wants to suck in Indian job recruiters for "free," why not let them? One interesting point is that Goo's voicemail transcription doesn't do Indians very well. I could go on about that at some length, but perhaps not now. I got one transcription that started with "This is God." It would be disturbing if God speaks to 3rd+ generation Americans with an Indian accent.

Anyhow, this latest guy might have acknowledged my point in his first voicemail, but I can't make much out of the transcription of the second one. I haven't listened to it yet, for at least two reasons.

His accent is fairly understandable. It seems unlikely that he could speak English well enough but not write it. I am tempted to inquire whether he is literate, although I have wondered that about quite a few Americans, too.

I'll stop this entry. The main reason I bring it up is because I updated the May 30 entry on the same topic.

July 3, 2022 - creating an AWS IAM user to stop and start a test instance

I left my main client's test machine running overnight. This cost roughly 20 cents, but it's the principle. I get a bit paranoid turning it off for fear of turning the live machine off. Note that you can easily "Name" instances (virtual machines) on the far left side of the instances list. Just hover over the space and you'll get an edit icon. Naming "live" and "test" helps, but it's still not good enough.

My IAM solution was not all that I hoped for, but it does the job. If I had enough budget, the answer would be to write a program that automatically shuts the instance down, using the same IAM policy.

You can use the visual policy maker, but note that some of the relevant permissions can't be restricted by a resource such as an instance ID. If you try to restrict them, the policy will break. This is what I came up with:

Updated August 2 (notes) - added "ec2:DescribeInstanceAttribute"

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "ec2:StartInstances",
                "ec2:StopInstances"
            ],
            "Resource": "arn:aws:ec2:us-east-1:123456789012:instance/i-abcdef012345"
        },
        {
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": [
                "ec2:DescribeInstances",
                "ec2:DescribeInstanceStatus",
                "ec2:DescribeInstanceAttribute"
            ],
            "Resource": "*"
        }
    ]
}

The permissions with the "*" resource are the ones that can't be restricted. A login user with that policy can see the instance list and stop and start the relevant instance. The user can also see other instances in the account, and the "stop" button is there, but pushing the stop button leads to an error; in this case, an error is the correct answer.

When using the limited user, note that you'll see all sorts of "API Error" messages and other red-colored errors because the user has very limited access. It also helps to click the instance you want, and then you can stop it. That's probably a good idea in any event--using any sort of user.
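
On that note, the same limited user works from the AWS CLI, which is also the natural path to the automatic-shutdown program I mentioned above. A sketch, assuming the CLI is configured with access keys for that IAM user, reusing the placeholder instance ID from the policy:

# allowed by the Describe* permissions (the "*" resource)
aws ec2 describe-instance-status --instance-ids i-abcdef012345
# allowed only for the instance ID named in the policy
aws ec2 stop-instances --instance-ids i-abcdef012345
aws ec2 start-instances --instance-ids i-abcdef012345

A nightly cron entry running the stop command would be the budget version of the automatic shutdown.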

June 18, 2022

I made 2 changes to my June 11 "job / gig application" entry. I added "project flag" as an example, and I removed my defense of a usage of "he." I comment on that removal on my personal blog.

note on June entries through late June 17, 2022

As of June 17, 22:50 my time, I hadn't posted any June entries until a few minutes ago. I left them with their original dates, though, as they sat lonely on my SSD. I had backed several versions up to GitHub; as I note and link to at the top of this page, I sometimes save drafts there.

June 14, 2022 - ranting about the "real world" job hunt

>> [my email] 9:01 PM

>> I started my work "day" at 7:30pm. I am a night owl and need to minimize work during business hours or daylight, for that matter.

> [recruiter's email] 9:15 AM

> Would you be available for a quick call in the next hour or so[?]

No, I will not: neither today nor any other day at such an hour, unless you have a relevant answer to my point. Even if you did have an answer, it would have to be quite convincing for me to consider talking to you before 10:15am.

This guy was Indian, although American recruiters have rarely proven to be any better.

June 11, 2022 - This is a job / gig application.

Update: a month later, I expanded this with more front end examples.

"Please send at least 5 examples of [relevant] sites you have made, or worked on stating what your role was."

EXAMPLE: For the last six years, 5 - 10 hours a week, I have been developing and enhancing a web application that is a literal digital assistant for a law office. I say enhancing because I didn't start the project, but I've reworked it so much that I can call it "mine." My work on the system is cheaper than hiring a human assistant, and it makes a human assistant unnecessary. It's Linux, Apache, PHP, and started as MySQL. I have been moving the data to MongoDB. Soon I might get around to moving some of the code to Node.js, which is more compatible with Mongo than PHP is. The application is hosted on AWS EC2, so that makes me a perhaps-beyond-full stack developer down to the Linux sysadmin level. The system started in Drupal, but I am liberating it from Drupal. More details are on my resume.

The job description on hand "might" require references. My lawyer client will certainly serve as a reference, and I can think of several others.

Regarding troubleshooting skills, I did that more or less full time from 1997 - 1999, and those skills have only gotten better. I've done plenty of troubleshooting since.

Given the zillion lines of code I have in GitHub, I think that demonstrates I can use Git.

I list some (client-side) JavaScript examples below. In part to demonstrate that I am not delusional, I'll repeat that I tend towards full-stack business web applications rather than front-end "make me a pretty, public-facing website" work. I make no claim to be any sort of visual artist. There have been a number of times, though, that my "art" was close enough if the client knew more or less what he wanted to see. I have zero to little original artistic vision, but I can at least sometimes implement another's vision.

EXAMPLE: Speaking of artistic vision, I'd like to think that my numerology calculator "icon" is somewhat pretty:

Numerology Calculator
5345963677 3133331269

That is HTML, not an image. It's also a live link to my numerology calculator. I haven't substantially touched it since 2010, but it's a useful piece of JavaScript and is written to an objective standard.

Note that I take neither credit nor blame for any visual aspect of my father's website mentioned below. I did NOT create the site and have done minimal work on the front end, but I've done quite a bit of work behind the scenes keeping it going over the years and almost decades. The numerology calculator on his site got caught in the visual crossfire between my original and the WordPress "developer." Again, I take neither credit nor blame.

regarding WordPress (part 1)

I've fixed a number of problems and added a few features to my father's WordPress site over the years. EXAMPLE: All the work I've done on his site is listed on my resume as various "project Numbers" entries. I very specifically avoid "WordPress." I understand the job in question right now involves WordPress. I will come back to that in a moment. I want to show more stuff before I very carefully address the WordPress elephant.

EXAMPLE: The "flag project" parts 1 and 2 were visually demanding for my talent, or lack thereof. A client gave me a very detailed visual spec, and I did it such that he paid me for the mockup.

EXAMPLE: I hope my clock is a nice JavaScript example. There is some work on the back end of that, too. As I argue at some length, it should be a very accurate clock, probably better than the official US NIST clock.

EXAMPLE: I wrote a Firefox extension for Facebook years ago. I didn't maintain it for quite a few reasons, but I believe I made my case (and the source code is still there) that it worked quite nicely, and it wasn't easy to accomplish.

EXAMPLE: In April, 2013 I wrote a Chrome extension that plucked phone numbers, email addresses, and snail addresses out of GMail and put them in Google Contacts.

EXAMPLES: my quick links / favorites. Roughly half of those are little, or not-so-little, applications I wrote. Some of them are just links to external sites.

EXAMPLE: Regarding Bootstrap, here is an example where I boiled down a zillion characters of Bootstrap CSS until I had what I needed. I was horrified at the thought of all that CSS for one example.

WordPress, part 2

The job description I'm answering eschews "agencies" (recruiters) and wants freelancers. I took WordPress off my resume because I don't want the sorts of companies that use recruiters to contact me about it. So far, I have not been a fan of WordPress, but I'd much rather freelance and do WordPress than work 9 - 5 doing anything.

I should remind myself that WordPress is open source, it uses an open-source language and database, and it runs on Linux. There are almost infinitely worse technologies and companies to work with. Also, if I am around WordPress fans, I might question them and see what they have done in it and come to appreciate it.

I could elaborate on WordPress, but I should let it rest for now.

June 10, 2022 - moving my AWS instance

I'm in the process of moving this site's AWS EC2 instance from one EBS block to another. My goals are twofold. One, I am complying with my own dev rule #2 that the computer I'm typing on should be as close as possible to Kwynn.com's system. My site's Ubuntu version is getting behind, and thus so is my PHP version. I'm typing locally on Ubuntu 22.04 with PHP 8.1, and my site is on 20.04 and 7.4. My other goal is to increase my disk space. I am running on 12 GB right now, and that's getting too close to filling up the disk. My main client has been running on 16 GB with no problems, but I'm going up to 18 GB. I actually don't completely remember how I wound up with 18, but I already have an Ubuntu-upgraded EBS block stored, so I'm committed in terms of the time I spent on the upgrade. I moved it to the smallest "compute" type instance (that I list way below) to do the upgrade, and it still took over an hour, as best I remember. It may have been 1.5 or maybe 2 hours.

I immediately came across one bug in my testing. My BitCoin price widget died in PHP 8.1 with "file_get_contents(): SSL operation failed with code 1. OpenSSL Error messages: error:0A000126:SSL routines::unexpected eof while reading ..." There is a fix moving down the PHP pipeline, but I fixed it by using cURL instead of file_get. See the specific version of my BTC widget. To confuse the issue a bit, I had another problem to solve, too.

For the record, here is the GitHub listing of the OpenSSL problem. Last I checked, this appeared to be fixed in php-8.1.7RC1 dated May 24. I am current for Ubuntu, and I am still on 8.1.2.
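
For anyone who hits the same error, here is the rough shape of the cURL substitution. This is a sketch, not my actual widget code; the function name and URL are placeholders (see the linked BTC widget source for the real thing):

<?php
// placeholder function name and URL; the real widget hits a BTC price API
function fetchUrlViaCurl($url) {
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // return the body rather than echoing it
    curl_setopt($ch, CURLOPT_TIMEOUT, 10);          // seconds; don't hang the page on a dead API
    $body = curl_exec($ch);
    if ($body === false) {
        $err = curl_error($ch); // e.g. the "unexpected eof" SSL complaint
        curl_close($ch);
        throw new Exception("cURL failed: $err");
    }
    curl_close($ch);
    return $body;
}
echo fetchUrlViaCurl('https://example.com/btc/price');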

So here is a checklist of items to do / test / consider as I continue my testing, before making the new 18GB Ubuntu 22.04 block live.

  • rsync the local version of my DOCUMENT_ROOT tree (see the rsync sketch just after this list)
  • git pull /opt/kwynn from my general utilities
  • test my clock. Usage seems to be down, but people still use that, I think.
  • test chrony and the accuracy of the clock
  • test the redirect / rewrite for my resume
  • route a temporary URL with SSL cert
  • add that URL to my email check widget so I can test that
  • test my own other "favorites"
  • test the numerology calculator, I suppose, but that is very unlikely to break (It's all client-side.)
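
Regarding the rsync item, a minimal sketch; the paths and host are hypothetical stand-ins, not my actual document root or instance:

# -n is --dry-run: run it first to preview, then drop the -n to actually sync
# the trailing slashes mean "the contents of" rather than the directory itself
rsync -avn --delete /var/www/html/ ubuntu@new-instance.example.com:/var/www/html/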

I started this entry and then cut it off before its time. I'll probably come back to it.

June 6, 2022 - Mr. Zoom continued, and Mr. D. (started June 2)

Update: On July 3, I'm rewording a point several paragraphs down. I'll mark the point below.

My entry on the 31st was a rant on various problems I have with potential clients. I did feel better. It brought me some sense of completion, even if the completion is to pound deep into the ground why I should avoid certain potential clients.

I suppose the difference between my personal blog and tech blog is getting thinner, at least for the moment. Obviously I may / will alienate some people. Hopefully I can filter out nonsense and come out with a few people I can work with.

Mr. D., and also a brief word on my job / gig hunting status

Mr. D. is a different color from Mr. Zoom. In the case of Mr. Zoom, I only mean that it's a different potential problem; I have no idea of Mr. Zoom's "colors" as I use the term in the next sentence. For the record, Mr. D's color is certainly not one in the libtard (useful idiot) alliance rainbow. (This is the paragraph I clarified on July 3. Also, I meant Mr. D just below, and fixed that.)

I would not say I made a mistake by replying to Mr. D's "help wanted" ad, but I should not have found the second round surprising. Oddly, his ad was, let's just say, based on an "affinity group" more so than specific tech. It was one that I almost never see advertised. I am more-or-less a member of that group, and the ad was so novel that I replied. And I passed his affinity test.

Part of the point of this sub-entry is that I have found a number of times in life that even if people overlap on X, they don't necessarily overlap on anything else. I suppose my thoughts on Dragon*Con attendees really should be in my personal blog. I'll make a note to self to go back to their parade protester in roughly 2007 and then their sheep-like actions in 2020 and murderous actions in 2021. The short version is that when I went to the Con several times between 2002 - 2006, I almost belonged, but almost was worse than not belonging. I've had other examples.

Mr. D. wanted someone to work on one of the proprietary web IDE-ish systems. When I read the post, I didn't know exactly what the system did. I told him that, but he more generally wanted web work done, so I could probably do it. His first reply was that he found someone both in the affinity group and who worked with that IDE-ish system. We slowly went back and forth for weeks, but there was nothing actionable.

In those weeks, I drifted closer to the "real world" job hunting path than I have in a while, probably years. It takes some energy to make that shift. In the past, the results of that path were such a wretched cost-benefit that I am having to push against that history. When he wrote back with an actionable request, I was busy trying to keep the momentum on the "real world" path. For example, I finally fairly thoroughly rewrote my resume. I didn't realize that I hadn't done that in 6 - 7 years. I'd partially updated it, but not thoroughly. I've been writing to recruiters pretty much every day--that is, I'm actually answering some of them from the steady trickle that comes in even when I'm not pushing my profile / resume.

One point is that given the mental and even emotional effort that goes into maintaining that (moderate) momentum in the face of abysmal history, it's hard to turn around and react to something that doesn't appear to be any sort of steady work.

A bit (more) on the "How much?" rant: Mr. D, amongst other jobs, does, loosely put, construction--very physical stuff. He thus may be more biased than the general client. The general client doesn't understand how difficult that question is in software. People who work with hardware--in the general sense that existed 200 years ago--probably have a yet more difficult time understanding why the question is so difficult. Imagine being asked how much to install a washing machine and then coming to find that the client simply has a dirt lot with no walls, roof, or pipes. That's the sort of thing that happens in software, metaphorically speaking.

I started this on June 2 and am now writing as June 5 ends. I originally planned to keep ranting on "How much?" but I think I'll move on to the rest of the rant, although I may be able to fill in the metaphor.

Mr. D has what is very likely a shared hosting account on what is definitely GoDaddy. As far as I know, GoDaddy was the first to effectively seize domain names. The first instance I heard of was what I think started as DailyStormer.com. GoDaddy effectively seized the domain and sent DailyStormer on an ongoing quest to find a home. Last I knew, they were at DailyStormer.su. It is hysterically and grimly and tragically funny that supporters of those who tried to destroy the Soviet Union would wind up on the .su Soviet Union top level / country domain. Daily Stormer is now on a .rw Rwanda site, which is its own variant of hysterical, grim, and tragic.

I found Daily Stormer's latest address on Unz.com. Unz has Andrew Anglin's articles fairly often. Note that Ron Unz is Jewish and got 34% of the vote for Governor of the (Royal Colony of) California in 1994. Even though California specifically calls themselves a Republic and was never openly a British colony, almost no states have shown much independence from any evil forces for many decades.

Back to GoDaddy, I'm fairly sure they did the same thing to other sites. Working on their site would be irritating at best, and there are other issues.

Shared hosting is somewhat analogous to installing a washing machine with no house or pipes around it. I might elaborate on that, but the technical situation gets worse. I don't know what version of the IDE-ish system he's using, but he's exporting HTML version 3. (It's actually exporting itself as XHTML 1.0 / HTML 4.01 "transitional," but in 2022 it amounts to the same thing as HTML 3.) The HTML 4 standard came out in 1997. His site has such anachronisms as width attributes--not CSS width properties, but the ancient width attributes on the tags themselves.

So one question is whether an upgraded version of the IDE-ish system will write HTML5. There is also the related point that the IDE-ish system is proprietary. Members of the affinity group should be moving quickly away from proprietary. Funding Billuminati Gates' SatanSoft contributed to Gates' Satanic Crusade, known to the shod, masked, vaxxed zombies as "Covid" (Billuminati's Satanic Crusade) sponsored by MicroSoft (SatanSoft).

Given that the IDE-ish is proprietary, I have little interest in finding out what it will export. I would be embarrassed to do any work on such a site without upgrading it to HTML5, and that might have to be done by hand. Yeah, I'm sure there are various off-the-shelf ways to easily automate it, and I could automate it myself, but the point is that this is getting to be a lot of work.

Come to think of it, I am looking at one of his sites, but I may not be looking at the relevant one. I suppose I should have asked that. I will continue the rant, though, because the site I'm looking at represents a common type of problem. Presumably he found someone willing to upload HTML3, as embarrassing as that is. Presumably there exist people who will deal with a shared host and obviously relatively few people care about GoDaddy's actions.

There are so many projects on the board that I use that I really hope other people can do competently. I would need to be convinced that I'm the last hope.

I should also mention that he wanted me to call. When I wrote him, I hadn't researched the host, IDE-ish, and ancient HTML and such. The call was presumably about what he wanted done aside from the other considerations. Once I'd answered my own questions about host, shared host, and such, I'd imagine I could understand what he wants in a few written sentences. As I have written to some degree before, I have found that I should not deal with people who are in too big of a hurry to get on the phone. It's me, not them.

What he wants done would be easy on a real host (VPS / VM) and starting from "real" HTML(5). I have a pretty good idea what he wants. But it doesn't seem like enough work to go through the trouble of cranking up a new relationship. As I wrote in the previous entry, I am not cut out to freelance. Or at least I am not cut out to scrape for many tiny projects.

He was also a bit snippy that it took me a week to write him back. In one sense, that is understandable, but it's a bad start. I have explained at great length my reasons for delay. I am not asking him to care about my reasons; he should not care. This is a rant. I'm ranting about how this situation and similar situations look to me.

more on video calls / Mr. Zoom

My previous entry was in part my ranting about video calls for software dev interviews. I forgot one of my key points, though. Corporatia seems obsessed with their DIE WHITEY (DIE - Diversity, Inclusion, Equity) departments--also their DEIty (Satan), or an effective IED on Western Civilization, although there is nothing improvised about it. A lavishly funded psy war going back at least to 1914 is not improvised. From memory, 1914 was when the word "racism" was coined. Before that, it was assumed that the races were different. "Racism" was coined to push equality in the communist sense of the term--whites give according to their ability and others receive according to their need until everyone is equally poor, and / or everyone is racially diluted and equally poor. The people at the top of pushing the agenda do not want diversity; they want dilution such that there will be no more whites, given that whites have opposed them.

Non-white candidates have surely gleefully added their color to the appropriate application / profile checkbox, so that's no reason to see them.

And if libtards want to maintain the fiction that they believe all colors should be treated equally, why do they need to see potential software developers onscreen?

For that matter, even voice almost always carries such indicators, but writing should be colorblind.

And then there is the whole notion that it doesn't matter more generally what people look like. Being obese to the point of taking 20 years off one's lifespan is a valid lifestyle choice. So why do they need to see people?

I suppose I've partially hinted at the answer to my question. They want to see the person to push against bias or apply bias as needed. My point is still that if they wanted colorblindness, stick to writing. Of course, they don't want colorblindness, they want "affirmative action."

May 30, 2022 - processing a potential client

Update on July 4

There was an intermediate person between myself and Mr. Zoom; I'll call the intermediate Mr. Mid (in the middle). I sent Mr. Mid this entry as soon as I wrote it. He responded quickly in part but not in full until a few days ago. He says this is worth sending to Mr. Zoom, which is interesting.

I add this point based on Mr. Mid's feedback. I'm not going to re-read my entry for several reasons, so I might be repeating myself. Below I rant about how job recruiters tend to ignore my necessary and very unusual night owl request. Mr. Mid pointed out that Mr. Zoom may have simply made an oversight. Of course that is true and perhaps likely. I believe I emphasized that I'm not trying to beat up on Mr. Zoom. As is noted in later entries, part of the point was to rant / vent and thus hopefully feel better, and, as I recall, it worked.

back to the original post

"Processing" as in processing their emails in my inbox, for good or ill. I suppose this can be in my tech blog because this is about the acquisition of clients, or miserable failure to do so.

I'll back up a bit. I have known for years that I'm not cut out to be a solo freelancer. I don't have the people skills. As I remember, the whole freelancing thing started out relatively well, but then I got hammered with a series of "divorced, beheaded, died, ..." After being bitten a number of times, my situation has been, to a degree, a downward spiral--a positive feedback loop. I'm afraid I'm going to get bitten, and it's self-fulfilling. (Yes, the engineering term positive feedback loop is correct.)

The "real job" path resulted in more serious bites. The major reason I attempt to freelance is that it's 10:10pm (and now 11:30), and I'm early in my work "day." The latest attempt to solve the night owl issue in the "real world" went very badly. I figure I was operating at 30% capacity, but the joke is that my boss wanted to keep me. However, I was so exhausted that I wasn't thinking clearly. Also, I had been set up to fail by the recruiter. He's supposed to be the people person, so I took his advice. I was then so angry at him that I wanted to make sure he didn't get paid, so I quit. I can't remember how much of that story I've posted, and I want to limit digressions right now, so I'll stop that thread for now.

Mr. Zoom

Given that Mr. Zoom's latest email has been there for months, I'll start with him. I am not assuming he'll ever read this, and our brief attempt at doing work together may be long over. That's fine. If you do read it, though, I am not trying to dump on you, Mr. Zoom, personally. One of the purposes of this is to try to deal with frustrations that have been building for years. You just happen to be the one when I decided to rant.

He's Mr. Zoom because he requested a Zoom call.

To some degree this is inspired by one of the Cluetrain Manifesto guys, the roughly year 2000 version. He mentioned that he wrote a really irreverent blog that got relatively popular. He said that it was much better marketing than all of the traditional marketing techniques. My attempt is probably going to go way beyond irreverent. For that and other reasons, this may not help in either the short term or long term. It seems the thing to do, though. This may be the really angry side of "Speak your truth," but so be it. Put another way, the commandment of the hidden door to Moria was "Speak friend, and enter." Maybe I'll find the password to something more appealing than an orc and Balrog infested ruin. (Yes, yes, I know that the proper translation was "Say the word 'friend' [in Sindarin], and enter.")

In my previous email to Mr. Zoom, I said, "I used to call myself the King of the Night Owls; now I'm only the Knave..." (KOTNO) Part of what's held me up is that you (Mr. Zoom) didn't specifically address that, so I have little reason to think we're going to get anywhere.

In roughly 2015 a DARPA Lifelog (Facebook) recruiter wanted to schedule a 20 minute call, and I actually took him up on it. I have trouble remembering the time when I was that willing to talk to people on the phone without vetting them in writing first. The call lasted 7 minutes. Before he could start his song and dance, I explained KOTNO, and told him that I didn't care if he was representing the all-powerful Lifelog; I still needed a solution to KOTNO. (Little did I know how much more power they would exert in the direction of murderous evil.) I gave him a few potential solutions to KOTNO, and he said that he'd look into them. To his great credit, he sent me a nice note a few days later that convinced me that he did try to some degree, but he didn't see a solution.

Somewhere between bragging and bemoaning the ridiculousness of it all, I got 2 contacts from 2 different Amazon Web Services (AWS) recruiters on May 9 and May 17. Yes, Amazon has shown their tendencies to the Dark Side for several years now. I think I've addressed them already (somewhere on my site if not this blog). The short version right now is that I am getting desperate enough. I decided to engage the second AWS recruiter because she took a different approach than usual. That her name was female probably helped. She has me convinced that if I ace their 2 hour test, I can probably negotiate what I want. Once she sends me a test link, I have a week to start the test. I'm not quite ready for that, but that's another discussion. Hopefully I'll ask for the link soon and take the test.

I've probably mentioned this, too (perhaps on this blog), but in 2010 I got through 2 phone interviews with Amazon Web Services and got flown to Seattle. I was doubly doomed by KOTNO issues, though. I'm not saying I would have gotten the job minus those issues, but KOTNO issues didn't help. Yet another story for later, if I don't find I've already told it.

Then to add to the parade, a Big Evil Goo recruiter contacted me on May 24. I did write her back, but I haven't heard back. Goo would be a lot harder to stomach than Amazon, but I might be desperate enough to pursue it if she ever writes back. She wanted to do the 20 minute call, and I told her the exact same story as above--there is little point until I have some hope of solving KOTNO. I'm not claiming I have great chances even if I did pursue it, but it is vaguely flattering to get the attention. It is also of course frustrating.

Sorry for the digression. The point being with Mr. Zoom that it helps if you specifically address KOTNO. It seems silly to schedule an "event" and then have it shut down in 7 minutes. I've been too worn down by dealing with various job / gig hunting issues to be able to shuck it off. My mindset on the matter has apparently deteriorated since I was so willing to talk to Lifelog guy.

Before addressing (the application) Zoom specifically, I'm just going to let it all out when it comes to video calls. Again, I'm not trying to dump on you (Mr. Zoom) personally. This has been building for years.

Actually, I need to back up before video calls and talk about audio-only calls first. In the end it's likely I'll talk to a potential client. I have learned over and over and over, though, in at least 2 contexts other than business, that I am not compatible with people who are in too much of a hurry to get on the phone (let alone video). I talked to one potential client for 7 hours in one go. It turned out to be deceptive, not clarifying. I thought I knew him and liked him, but there were issues that made the call irrelevant. One short version is that the same reason we talked for 7 hours turned out to be a huge impediment when it came to reviewing what I'd written. He was driving the country and thus couldn't get on either a phone or a computer very often. Also, I thought I had his favor. What I didn't realize is just how impatient he was. I thought I had time to develop something for the long-term. He needed something that just barely worked, immediately, as it turned out. There were other incompatibilities, but the point is that the call was a huge waste of time that led to a much bigger waste of time.

A related reason why I seek writing compatibility is that if I get very stressed, you'll get one long email from me, and if you can't at least partially address it in writing, I'm probably done with the project. (That needs some elaboration, but perhaps some other time.)

It might sound like I've had nothing but disaster for my entire freelancing era. That's not true. I've had a number of very successful projects, and some of those were fairly big or very big. But the net is that financially I haven't quite made freelancing work. I've had a handful of disasters that led to that positive feedback loop. A "disaster" almost always means I spent a lot of time and got paid almost nothing. Usually I didn't even seek payment. In most cases the situation was very likely salvageable, but there are a number of relatively small issues that I have been unable to recover from. Again, I am not cut out for this.

Back to my issue with the phone. Writing software is writing. Written skill is what's at issue.

Then there is the fact that software developers are notoriously "shy" for lack of a better word. I would say the better the developer the more "shy," and vice versa. I can think of a very small handful of people who can do both people and software. Getting on the phone without enough context is hard. "Context" in this case being, "Yes, I might be able to deal with KOTNO." There is other such context, too. I'll get to that.

Now I'll bleed from audio to video calls. Software development is not improvisational acting. It should not involve makeup, or whatever the male equivalent is. (I am male, for the record, but I'm thinking about it from a woman's point of view, too.) Dev is not a real-time activity of any sort. Driving is real-time, let alone flying. Real-time meaning that you must deal with inputs fairly quickly, like turning the steering wheel to go around a curve (or stay straight). Amazon is much closer to the right idea. Otherwise put, how is an interview of any sort relevant to dev'ing? At least until context is established. In Amazon's case, context is that one passed the test.

Update: in my June 2 entry I add a few points on video calls.

Now to the Zoom application specifically. For one, Zoom is proprietary and a standalone download (last I checked). Another bit of context after KOTNO is that I am an open source fanatic. Usage of Zoom does not bode well for that. I refer to one big relevant company as SatanSoft (Billuminati Gates' company) and the other as the iCult.

But it gets far worse. I had never heard of Zoom before Billuminati Gates' Satanic Crusade, known to the shod, masked, vaxxed zombies as "Covid." So Zoom would have to take specific steps not to be labeled as a "Covid" profiteer, or as conspiring to promote "Covid." If their statement were something to the effect of "These people promoting 'Covid' are mass-murdering Satanists, but we at Zoom are not yet prepared to call for revolution. We will try to help you survive their tyranny in our own little way," that would be something. Instead, their site says "In this together." You can sometimes see it briefly on their homepage, and then it stays in the HTML source as the innerHTML of several h1 tags. That's rather infuriating, to put it mildly.

Otherwise put, I want to see them totally financially destroyed. And it is possible that some of their officers are knowingly lying to promote "Covid," which is of course conspiracy to commit mass murder. Of course, if they are not knowingly lying, they are somewhere between stupid and naive. If they are that stupid, how are they officers? (Oh yeah, they might be affirmative action hires.)

I think that will do for Mr. Zoom.

Update: I continued my ranting on video calls in the next entry above, dated June 6.

LG T-Mobile (MetroPCS) "Unfortunately, LG IMS has stopped" - May 23, 2022, 20:07 (updated after 20:24)

This started showing up Sunday, May 22 around 10am my time. I didn't encounter it until 5pm or so yesterday.

I finally fixed this with several variations on what's out there. I have an LG Aristo 2 with Android 8.1.0. I tried the instructions of clearing data and cache and force stopping 1 of the 2 apps I list below. After several attempts, that didn't work. Below is what I did that wasn't in the instructions:

  • I cleared the cache, data (for 1 of the 2), and force stopped BOTH "LG IMS" AND com.lge.ims.rcsprovider (I cleared this com.lge... first). There are separate entries for each app. (The "LG IMS" app will not let you clear data, but that did not matter. 3 dot menu / hamburger menu "Show system" lets you see the system apps. Clearing cache and data is under "Storage" for each app.)
  • I had the SIM card out (removed) and was in airplane mode one of the times when I did the clearing / force stop step. Then I'm pretty sure I turned the phone off, re-inserted the SIM card, turned it on, got the error again, did the above step again, and then pulled the SIM card out with the phone on to force a reboot (see below). Then the error went away. After roughly 1 minute to confirm the error was gone, I think I turned the phone back off, then put the SIM card in, then turned it on again. I made a call to check phone service, 4G LTE shows connected, and I have not seen the error in roughly 30 minutes, which almost certainly means it's fixed.
  • For the final reboot, I removed the SIM card while the phone was on and in airplane mode. Within roughly 15 seconds, Android gave me a message about no SIM card and then rebooted itself. Previous reboots had failed to fix the problem, so forcing it to reboot due to no SIM card might have been the difference.
  • I took the battery out for about 40 seconds before the previous-to-working boot. That is unlikely to have made a difference, given that it was a previous boot, but I've heard of that solving such problems in the past.
  • From the time I put the battery back in until I confirmed the fix worked, I'm pretty sure the phone was in airplane mode.

Updates, 20:24

I just noticed that some people are saying it fixed itself after a reboot, so it is possible I didn't do anything useful except reboot 7+ times. On the other hand, I am reasonably sure nothing updated itself. At a glance, though, it's hard to figure out timestamps on Reddit, so I'm not sure when that was.

For sake of documentation, I started from this post on "Android Police." Also, here are a handful of Reddit posts: 1. Megathread, 2. "Unfortunate" thread, 3. wake up call thread.

To repeat my "apology" in the 17th Century sense of the term, the only reason I linked here in my Reddit post was that when I tried to copy and paste my pre-written instructions (as above), my text kept doubling and tripling in the Reddit HTML box.

April 27, 2022 (00:07)

I have started an Ethereum / NFT project.

April 11, 2022 (01:14)

It's been an interesting few weeks. There is a lot to report, but I'm not sure how much I'll get out now. My immediate purpose was to record, for myself and whoever finds this, the details on the MongoDB 5.x CPU requirements. For Intel, it needs the "Sandy Bridge" (micro)architecture or newer--apparently because 5.x uses the AVX instruction set that Sandy Bridge introduced. WikiP tells us this came out in 2011 with the Core i3.

It would appear a lot of people didn't get to the bottom of this. Fortunately, I figured it out fairly quickly and reverted to 4.x. Here are some of the error messages I see, in case Big Evil Goo picks it up, and this helps someone. I probably figured it out because I'm well aware I have ancient hardware. It's about 2009 vintage. It was a $3 - $5k computer at the time, I'd imagine. I got it for something like $160 in 2017.

(core=dumped, signal=ILL)
status=4/ILL
core-dump
core-dumped
Illegal instruction (core dumped)
kernel: traps: mongod trap invalid opcode
mongod.service: Control process exited, code=dumped status=4
mongod.service: Failed with result 'core-dump'.
dumped core
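
My understanding is that the Sandy Bridge requirement boils down to the AVX instruction set, so you can check a machine before installing 5.x:

# prints "avx" if the CPU advertises it; no output means mongod 5.x will die as above
grep -w -o -m 1 avx /proc/cpuinfo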

March 18, 2022

web server access log analysis - README redone

In answer to "usual" (recent CS-grad) apprentice, I rewrote the README of my web logs repo.

create a git branch

the right way

I found the note file where I saved the commands, but it took me way too long to figure out what I had branched: grep -R branch | grep -P "0\.5" revealed it was my cms. Note that the comments from further below on "master" versus "main" and label versus branch apply.

git branch 0.5
git checkout 0.5
git add -A .
git commit -m "trying to create branch 0.5"
git push --set-upstream origin 0.5
git checkout master
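
As an aside, the first two commands can be collapsed into one, and newer Git has a dedicated command for it:

git checkout -b 0.5   # create the branch and switch to it in one step
git switch -c 0.5     # the same thing in newer Git versions
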
the long way / sort of wrong way / correcting mistakes

The following were the actual commands, where I went around in a circle. They did the job, but they're not the perfect commands. I'm "saving" this here because I'm removing it from the README (see link below). Note there is ambiguity between the words "main" branch and "master" branch. The libtard speech Stasi deemed "master" inappropriate. Git seems to be in the middle of transitioning both its gender and use of those words. I don't remember which is the right word as of that version or the current version. Here is the link to the branch I created. In hindsight, a label rather than a branch may have been more apt.

Note on branching:

The key is to create the branch and THEN to switch to the branch. I think you have to commit it. Make sure it shows up in origin / on GitHub.

git branch 0.32
git checkout 0.32
git add -A .
git commit -m "trying again to create branch"
git push --set-upstream origin 0.32
git checkout main
git add -A .
git commit -m "removing all from mai[n] temporarily"
git checkout 2a7231bda956def5e205e910062b7f3f4b23c046 cli/t1.php
git checkout 2a7231bda956def5e205e910062b7f3f4b23c046 README.md
git checkout 2a7231bda956def5e205e910062b7f3f4b23c046 parse.php
git add -A .
git commit -m "new main or master branch"
git push

March 17, 2022 - web log introduction for "sales guy" apprentice (started ca 22:40, updated: 3/18 17:03)

Note that on 3/18 00:15 and possibly later, I am adding stuff at various points. The furthest down is not always the newest. At 17:03 I added one last note just below, and now I will almost certainly close the entry.

One more note, out of order. The line below shows a fetch / "GET" of my apprentice page. I should not assume that's obvious.

Below is a line from my web server access log. I'm going to separate it into 2 lines for page aesthetic purposes.

66.249.70.62 - - [17/Mar/2022:16:33:55 -0400] 689777 "GET /t/9/02/apprentice_steps.html HTTP/1.1" 200 3646 "-" 
"Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)" 

This is what I've been nattering on about and built a vast edifice to process.

The lines represent every page and other object query to a site. That is, the fetch of the .html page, all of the JavaScript files individually, all images, all external css, etc. That particular line is from the Big Evil Goo Bot. That's how Goo builds its search data: its robot queries pages, runs them through the wrongthink filter, and then maybe indexes the page for the search engine, if the page doesn't threaten various of their sacred cows.

The first field--66.249.70.62--is the IP address of the robot--where the robot is "calling from." I picked a line from the Goo Bot so that I could reveal the IP address. I feel much less than zero moral obligation to protect the privacy of their IP addresses. If you run "whois 66.249.70.62" without quotes from the Linux command line, after perhaps installing whois, you'll see that Google owns that IP address. There are also web services for whois.

The next 2 fields--"- -"--apply if a user is logged in with Apache's ancient password method that very few use anymore. At least, one of the "-" is for that. (Apache doc.) As I think about it, I should probably remove those from future logs, but not now. Anyhow... They are always "- -" in my case, which is why I should consider removing them.

Next is the time with GMT / UTC offset, set to my local time. Next is where I added microseconds. My EXACT log format is displayed on GitHub. The "%u" is where I add microseconds. This goes to my obsession with unique index methods. I was trying to uniquely define lines, but it didn't work. There are HTTP "408" response code lines that can happen in the same microsecond, or at least are logged as the same microsecond, so it doesn't work.

the "ancient" Apache passwords

Sales guy apprentice (SGA) asked about the ancient Apache login method. I call it ancient because I read about it and used it briefly years and years ago, and I would guess it's much older than when I used it. For reference, here is the password feature in Apache's doc.

SGA asked for clarification. Their password feature allowed you to create a password in a file on the same server as the web server. When you say "their side," Apache as an entity was not involved. That's not one of the "sides." It's "their side" in the sense that the password file was on the same side (server-side) as the web server.

I think you can integrate their password into a database, but I'd imagine that was / is clumsy. That's one reason it wasn't / isn't used. Another is that I have no idea to what degree they kept up with password hashing. The PHP language has built in more and more sophisticated hashing, for example. I don't remember what the permission issues were around the password file, which is another drawback. I could probably think of more. I'll just say that putting user data in a database is just how it's been done for decades, and this is one instance where I have no objection to "how it's done."

back to log files

Back to the log file, I have debated removing microseconds because they don't do what I want, but, then again, there is very likely information to derive from the microseconds, so I'll keep them.

The next field is "GET /t/9/02/apprentice_steps.html HTTP/1.1". GET is one of a handful of "HTTP request methods" or actions or verbs. I have not crunched the data on it, but I believe it's safe to say that GET is by far the most common. GET is what the browser (or bot) uses to get a page in most cases. Then "HTTP/1.1" is the HTTP version used. I suspect that all my requests right now are HTTP/1.1. HTTP 2.0 exists but is early in its support, last I checked. HTTP 2 goes to a binary rather than human-readable text format, which is surprising. I am not excited about a binary format, so I am in no hurry to adopt. As of now I see no technical pressure to adopt.

200 is the HTTP response code, such as listed / linked above. 200 is the famous "200 OK." It is probably / hopefully the most common response, although it would be interesting to crunch the numbers on that. There are quite a few hack attempts that result in "404 not found"; the 404s might be 5 - 10%, as I think about it. I call 200 famous because I know someone with a bumper sticker "200 OK," so it must be famous, right? To elaborate a bit, the 200 in this case means that the page exists and is accessible / has proper permissions, and it was served up successfully. (Successful in that Apache sent it. It doesn't mean it was received, although it likely was.)

3646 is the number of bytes returned. When I come back, I'll compare that to the HTML on the disk. "$ ls -l /[document_root]/t/9/02/apprentice_steps.html" shows 7984. If you do control-shift-I (capital I / India) in a browser (Firefox and others), go to the "Network" tab, and refresh a given page, you'll often note in the "Response Headers" an entry "Content-Encoding gzip." Gzip is a compression algorithm like the old WinZip / .zip. So the 7984 compresses down to 3646. I don't remember if that number includes the size of the headers.
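
If you want to see those sizes without dev tools, curl can report the download size with and without compression. The URL is my page from the log line above; your numbers will differ as the page changes:

# -s silent, -o discard the body, -w print the transferred byte count
curl -s -o /dev/null -w '%{size_download}\n' https://kwynn.com/t/9/02/apprentice_steps.html
curl -s -H 'Accept-Encoding: gzip' -o /dev/null -w '%{size_download}\n' https://kwynn.com/t/9/02/apprentice_steps.html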

"-" is the referrer, or the site that referred the browser to that page. For example, sometimes I see "https://www.google.com/" which means someone found the page from Goo Search. Sometimes I'll see referrals from DARPA LifeLog (Big Evil Facebook) or Goo's "Tube." There are also internal referrals such as JavaScript pages being "referred" from the HTML page.

"Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)" is the now-infamous-to-me "User Agent." "User Agent" is an odd term. It means the program that sent the request. In this case, it's the Goo Bot. I list lots and lots of user agents elsewhere.

return to Apache's passwords

SGA is curious, so, here we go. "$ htpasswd -c /var/www_ex/apache_pwd_ex sga" creates the password, with user sga. I type the password as I create the file entry. The super-secret password is "password123" without quotes. The "protected" directory / file is here. I'm following the instructions as linked above, and I'll link again. I created www_ex for this purpose. You want to create a file outside of the Apache file tree (DOCROOT / document root) because you don't want Apache to serve the password file. Apache (www-data user) needs permission to read the file, though.
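
For completeness, here is roughly what the Apache config side looks like per those instructions; the Directory path is a stand-in for wherever the "protected" directory actually lives:

<Directory /var/www/html/protected>
    AuthType Basic
    AuthName "SGA password example"
    # the file created by htpasswd above, outside the document root
    AuthUserFile /var/www_ex/apache_pwd_ex
    Require valid-user
</Directory>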

March 5, 2022, starting 13:04, winding down (maybe) at 16:26, with many long breaks in between, maybe posting 19:06

"PHP Fatal error: Uncaught Error: Call to undefined function mysql_connect() in /...path/[implied index.php]" You are using PHP 8.x, right? Or at least 7.x? Have they already aliased the 1990s "mysql_connect()" to be mysqli_connnect()? (MySQL goes back to '95.) The PHP documentation does not indicate that. In any event, I would use mysqli_connect(). You'll give old-timers like me hives just looking at the thought of the old mysql_connect(). Note that the "old" mysql_connect() is gone, per the doc.

I ranted about this several weeks ago. CERTAIN CLIENTS who sent me screaming were still using the original mysql... functions in PHP 5.x. The original mysql functions were deprecated in roughly 2013 and removed from the language in 2015. (The years are from memory.)

apt-cache policy php-mysql
php-mysql:
  Installed: 2:8.0+82~0build1

This leads me to believe that mysql_connect is an alias, but, still, for my peace of mind, if nothing else, consider using mysqli. If you want to do some research and see where it's documented as an alias, I'd probably live with it, but, given that the documentation still shows deprecation, I would avoid the mysql functions.

With all that said, I just got a reply that he's using mysqli now. Good.
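
For reference, a minimal sketch of the mysqli version, with placeholder credentials and database. Note that as of PHP 8.1, mysqli throws exceptions on errors by default rather than returning false:

<?php
// placeholder host / user / password / database
try {
    $db  = mysqli_connect('localhost', 'exampleuser', 'password123', 'exampledb');
    $res = mysqli_query($db, 'SELECT NOW() AS now');
    $row = mysqli_fetch_assoc($res);
    echo $row['now'] . "\n";
} catch (mysqli_sql_exception $e) {
    echo 'MySQL error: ' . $e->getMessage() . "\n";
}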

Now, back to the error itself. I'm not sure when (or if) it became relatively clear to me that it meant you had to install something separately. I've had variants of this drive me nuts fairly recently, though. I'll come back to that.

For future reference, consider the difference between the mysqli doc and substr() (or string functions generally) doc. The mysqli doc natters on at some length. That's a hint that something needs installing. I would probably not recommend reading all that nattering. Given that you're reading the general PHP doc, it may not be helpful towards the simple question of "How do I install this in Ubuntu?" Also, there are all sorts of references to "--with-mysqli" and related switches that I have never needed to worry about because I'm not compiling PHP from scratch. When you install php-mysql, it does the same thing as the switch.

It is harder than it should be to know which package to install in Ubuntu. Big Evil Goo usually answers it easily enough, but it's the sort of thing that may be quicker when I'm around to show you what I have installed.
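
One trick that sometimes beats Goo is searching the package cache directly:

# list candidate packages; php-mysql is the one that provides mysqli
apt-cache search --names-only 'php.*mysql'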

By contrast, the string functions say, "There is no installation needed to use these functions; they are part of the PHP core."

One variant that drove me nuts was when one of the MYSQL(I) constants was not defined. I didn't know if that was a PHP package thing or a Stoopal thing. It turned out that I hadn't installed mysqli. That was not obvious to me.

A worse variant is when the rewrite engine isn't enabled. The manglement (sic) systems should go out of their way to check that it's working. I wrote code to do it fairly quickly, so they could, too. Otherwise, you get the damnedest errors. I might have spent 3 hours fussing with one of them until I finally realized, from relatively subtle signs, using the debugger, that the lack of the rewrite engine was the cause. These issues are some of the many issues I have with manglement.

On an apprentice procedural note, my theoretical standard for something like the error above is to spend 20 minutes thumping on it. Then ask me. It's not worth that sort of frustration. I'm not sure you learn anything from it. I've said it before, but it's worth repeating: I remember spending 3 hours in 1992 trying to figure out SYNTAX errors. It would have been lovely to have someone point them out. I don't think I learned much from all that. I just got very frustrated and probably would have gotten out of dev if I weren't so damn stubborn. I think I've told the story here of someone who did quit dev in 1985 because his compiler simply said "syntax error" with no line number or hints or specific syntax violations.

In 1992, I sort of kind of learned to break code into smaller and smaller pieces--isolate the problem more and more tightly--to figure out where the problem is. It would have been nice to have someone state that in the context of my problem as a general rule. I sort of learned it, but I'm not sure it really stuck for a while. I think I've complained before of people who post to StackOverflow with 100 lines of code. Their problem is within 3 lines, and they should have isolated that before asking the question. That applies to StackOverflow, though. In the case of asking me, I'll go with the 20 minute rule for now. And maybe that should be 10 - 15 minutes.

To recent CS grad apprentice, do you want me to link to your blog? I'll do it if you want, but... I suppose I can ask this openly because it goes for any apprentices. You may have noticed that I'm getting more and more, shall we say, testy, on this website. I'm getting closer to the point where I say publicly what I might have only said privately in the past. I stand the risk of eventually being deplatformed or condemned by the ADL--those being about the same thing. I am at risk of being put in the same category as Putin and Foreign Minister Lavrov are right now. So, do you want to be literally linked to (or from) me? I would not be at all offended if you said "no."

In other news, CS-g apprentice wore a hoodie to a recent job interview, and even "swore a little bit." He is doing an experiment in being one's self. Maybe there is hope for me, but maybe not. He's not a fan of Putin and Lavrov. I'm not trying to be insulting, but I doubt he has consciously heard of Lavrov. Indeed, Lavrov has likely been somewhat obscure until the last few weeks. I know of him because friends who keep much better track than I do have been a fan of Lavrov's for many years. I didn't know until just now that Lavrov has been FM since 2004, so I've heard about him for quite a while.

Russian military fandom aside, I suppose my equivalent would be barefoot for an interview. Decades ago I heard a reliable report of an office in Orlando being commonly barefoot. Actually, I would be wearing my 2009 Holy Name Cadets Drum and Bugle Corps hoodie and barefoot during a light snow.

Anyhow, where were we?

You mentioned PHP and (client-side) JavaScript and loosely implied that learning them was in conflict, or that server-side (PHP) code had much higher priority. Many weeks ago I talked about the potential for 4 layers of code filtering from the database to the front-end. PHP and client-side JavaScript are part of the same team to process web data. Sometimes it makes sense to do most of the processing on the server-side and sometimes on the client side, and sometimes it isn't clear which is better.

"Things are processed on server side (whatever the code / language is), and then the result is then sent over to the client. The client doesn't see any code, just the output. You'd think i would already have known that but man it didn't click until a couple days ago."

"Now, i'm guessing JS is processed on client side?"

Correct on both points. More specifically, client-side JavaScript, which is what you're talking about in context, is processed in the browser. That's why you see it debugged in the browser's dev tools, and you can't see the JS execute on the server-side.

Which leads me to Node.js. Node.js is server side JavaScript. Just in the last few weeks it's finally started to fully sink in WHY the MEAN stack goes together. It's somewhat of an equivalent to your "revelation" above, but I don't have an excuse for not having my revelation much sooner. An apprentice guideline, to repeat, is that Kwynn is not all-knowing, and I suppose I can sometimes be kinda dumb.

Actually, let's leave Angular and Express aside. Angular is a client-side JavaScript library, so the JS part matters. MongoDB's equivalent of SQL is JSON (BSON) formatted. The distinction of what exactly it is--JSON versus BSON--isn't at issue for this limited discussion. I'll just call it JSON for now. In any event, queries, including data entry, are JSON formatted. A query result is essentially a JS object with some portion of the JS language to operate upon it. I don't yet know the extent of JS in Mongo objects.

One point being that I've been doing all sorts of machinations to harmonize PHP and Mongo. My life might be easier if I just learned Node. To this end, I installed a Node debugger early this morning. Let us all repeat in chorus, Kwynn's Rule of Dev #1: never dev without a debugger.

I spent 2 - 3 hours researching various debuggers, re-installed Eclipse, and tried getting Node to work in Eclipse. So there's Rule #1, and then there is my dedication to open source tech. So, when those two come up against my disgust and perhaps hatred of SatanSoft, it seems that I go with open source and Rule #1. After resisting it, I installed the MIT / open source licensed Visual Studio Code for Node. It works. Perhaps I should set some apprentice or another on finding a non-SatanSoft product.

"You keep saying don't code (such as php) without a debugger. How important is this over say doing 'php index.php' and having it give you the errors at the terminal?" They are not equivalent. Running PHP as CLI is different than using a debugger. There is no substitute for a debugger. Just to be clear on terms, by a debugger I mean software that allows you to set breakpoints, step through code, and watch variables as you step through.

I should explain why they are different. First, I'll back up and discuss CLI PHP versus "web" PHP. Remember that those two types of PHP have separate php.ini files. Also remember that if you change the /etc/php/8.0/apache2/php.ini, you have to restart Apache for the change to take effect. Anyhow, for dev purposes you should turn on "Display_Errors" or whatever the .ini variable is for showing errors in web PHP. That is, you should see the same error in both CLI and web.

By default, display errors is off in the web version because in theory you don't want the public to see the errors, because it could give an attacker insight into how to attack your site. Depending on what you're doing, I vote for turning display on even for a public site. I am almost certain display errors is on for kwynn.com.

With that said, one of the reasons why it's helpful to run CLI on a web application is that the error may not show up in the HTML output, even if display errors is on. Sometimes you'll get a totally blank HTML page. kwutils.php changes the way errors are handled, so I'm somewhat confused on this point because I'm usually using kwutils these days. But, as I remember, you get a blank page when the error is upstream of the page that was GET / POSTed from the browser. I think this is because by default errors go to stderr and not stdout. The HTML display is from stdout. That is, the PHP program "crashes" before there is any stdout.

Similarly, you won't see an error sometimes (even if display errors is on) unless you view source, because the error message may not be proper HTML. There is also a setting where errors are specifically HTML formatted, but, again, I've kinda lost track of this because I handle errors differently.

Anyhow, using CLI rather than web mode to "debug" is a minor point versus using a debugger for CLI, web, or ANYTHING ELSE. But, let me finish on the CLI versus web part, first. The reasons that it's often helpful to use CLI to "debug" are something like:

As above, display errors might be off (although you should turn it on), the error might not otherwise show in the HTML (stderr vs. stdout), or it might be embedded within HTML in a way that doesn't display. There are other reasons. Even if you're debugging, you're going back and forth to the browser, and just the browser being involved at all makes things slightly more complicated. I'll try to think about this. My guideline on this weeks ago was that it's often best to dev in CLI mode until you're ready to bring it into HTML. And it's often best to separate all the processing until you combine the HTML and PHP.

On a related point, don't forget /var/log/apache2/error.log or whatever you may have overridden error.log to. Sometimes very helpful PHP errors and warnings will show up in error.log. And, on a related point, when you turn display errors on, also turn on display E_ALL types of errors / warnings / notices. Occasionally you'll have to hide notices because they pop up in places that are not useful, but I've had to do that 3 - 4 times in many years.
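
For the record, since I keep saying "or whatever the .ini variable is": the php.ini directives are display_errors and error_reporting. At the top of a dev script, the equivalent is a few standard PHP calls. A minimal sketch; the .ini route plus an Apache restart is the more permanent way:

// dev only: show all errors, warnings, and notices in the output
ini_set('display_errors', '1');
ini_set('display_startup_errors', '1');
error_reporting(E_ALL);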

So back to debugging with a debugger. In some cases seeing an error message is all you need. The debugger comes in for situations where it's not working but there are no messages. In other words, a debugger helps solve your logic errors more so than PHP language errors. I think you once asked about echo / print / console.log. Your code will get cluttered with more and more of such if you go that route. Then there is the problem of your code never getting to the output points. I had that happen the hard way when I was trying to debug Ruby without a debugger. I was getting a distorted picture of what was happening because my biggest problems bypassed my "print" (or whatever it is in Ruby) entirely. In a debugger, you're running the code line by line and know EXACTLY what is happening and where in the code.

In other news, I hope I can declare my XOR processing as good as it's going to get in any reasonable amount of dev time. One tentative conclusion is that mongo CLI / mongosh is not particularly efficient at output. Although it's not apples to apples because I was running that output as a "$ mongo" shell command, and running any command through shell_exec() or the equivalent is relatively slow. In any event, I went back to queries being done from PHP and outputted downstream from proc_open(). With 12 hyperthreads / "cores" (6 real cores X 2 for hyperthreading), my XOR processing takes about the same time as one core does to XOR the raw file. Although that is also not apples because I have an old computer versus Amazon's spiffy new ones.

I also did some buffering both to limit the number of cursor queries and the number of fwrite()s. It appears that my entire CPU capacity pegs for somewhere around one second. Perhaps that's the best one can get, as opposed to the CPUs waiting on RAM or disk. There were some indexes that helped a lot. Of course those same indexes slow down inserts. I need to whittle down my indexes and try to find that balance.

The Mongo .explain() is helpful, although I'm not used to how it works versus relational. It was making some odd index choices that I had to curb. I simply deleted the index it was using and added another one that improved the situation dramatically.

Back to the email series. As I said, messing around with data types in advance in a database table is one reason I switched to MongoDB. In Mongo, you simply chuck an array or object into the database.

The number in VARCHAR(30) is the maximum number of characters--in this case, 30--the field can hold. The number is up to you. How many bytes are in each character is a separate issue. In Mongo, you don't have to worry about this stuff. What is UNICODE up to now? UTF-8 goes up to 4 bytes per character. Once upon a time, a character was a byte. The modern issue with VARCHAR and related data types is that one byte doesn't remotely cover all the characters in all the languages on earth. You need a set for Cyrillic, Mandarin, Hindi, Arabic, etc. You can restrict your database to the "original" characters, though.

This issue might have cost me a lot of money. I was once offered a 30 hour project, with possible extension. I was loaned a laptop with a working MySQL or MariaDB. It was enough years ago that it might have been either one of them. My KVM connectors and such were buried at the time. (I'm not sure if I could get at them now or not.) A laptop keyboard is almost useless to me. I couldn't get the database to load on my desktop. Given the nature of the restore program, the specific error was not easy to figure out. I finally found out that it had to do with UNICODE. The other dev had left an email address as 255 characters. It was something that did not have to be anything close to 255 chars. He had an index on that field. His database was set to Latin-whatever-number-it-is or some such. By default, my database was set to multi-byte UNICODE, and I think it was 4 bytes at the time. The problem was that MySQL or MariaDB didn't allow an index on a field above something like 768 bytes. Not chars, but bytes. So it wasn't allowing the table creation with the unique index. Now that I think about it, I didn't see the error because the import did not terminate. It just kept going after failing to load the test email addresses. It took me a while to set things up to see the error properly.

I had already been questioning whether I wanted to use relational again, and I wasn't sure I wanted to deal with Laravel, either. By the time I got the db loaded, I decided to walk away from the project. If he had given me access to the data file and not loaned me a laptop, I probably would have solved it. I kept going back and forth from desktop to laptop, though, and everything about the laptop was painfully slow. One problem was I didn't have a good place to put the laptop. It was a weird combination of events. It's one of those "What ifs?"

March 3, 2022 - commutative hashes / XOR by line (begin 18:45)

To quote Colonel Hannibal, "I love it when a plan comes together." (Although Captain Reynolds naked in the desert--"Yeah. That went well."--that might be yet better.) I suppose, while I'm at it, I should mourn the loss of General Hannibal of Carthage. The world would likely be a better place if Carthage had won, but I'm not sure they stood a chance under the circumstances. My vague memory is the world might have been a better place if Carthage had committed to total war. Hmmm.. There is discussion on the "What If?" and it is likely infinitely more honest than equivalent situations in the last 100 years. Might make for interesting reading. But I digress.

So I have been musing over this issue of quickly validating my web server access log database versus the source file(s). One problem is that all the hashes I can quickly find are linear in the sense of a VHS tape--they are only useful with one long string. They can't be parallelized. So I got the notion of XORing each line and then XORing the lines together. That is parallelizable. (That may not be a word, historically speaking. It is now.) I started in PHP. Right this moment, I don't remember why I went to C. At first I was concerned about 64 bit unsigned versus signed. That may have been it. PHP doesn't have an unsigned. Then I, dare I say, "circled back" and used signed in C. (I use "circle back" in mockery.)

In any event, I now have XOR working as planned in both C and PHP. (The code was in GitHub before I started writing this.) C and PHP match both forwards ("$ cat") and backwards ("$ tac"). This heavily implies they will match with the lines in any order--thus, parallelizable. C is 100+ times faster than PHP. I'm not sure I've ever known that the situation was quite that "bad." Our hardware is just stunning these days to create a situation such that I don't know that.
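
For reference, the core of the PHP version fits in a few lines. This is a from-memory sketch rather than my exact GitHub code, and the file name is just an example; the real thing streams and parallelizes, while this one reads the whole file at once:

// XOR each line down to one 64-bit value, then XOR the per-line values together.
// XOR is commutative, so line order ("$ cat" versus "$ tac") doesn't matter.
function xorLine(string $line): int {
    if ($line === '') return 0;
    $padded = str_pad($line, (int)ceil(strlen($line) / 8) * 8, "\0");
    $acc = 0;
    foreach (unpack('P*', $padded) as $word) $acc ^= $word; // 8-byte chunks
    return $acc;
}

$total = 0;
foreach (file('access.log', FILE_IGNORE_NEW_LINES) as $line) $total ^= xorLine($line);
printf("%016x\n", $total);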

March 2, 2022 - .php versus .html (start 23:03)

My apprentice is about to bring MariaDB (MySQL fork) data into a web page. That is, he's going to try. I'm sure he'll get it fairly soon, but that's the sort of thing that doesn't go smoothly the first time you ever, ever do it. With that said, he may have accomplished that particular "Hello, world!" already.

His page was index.html. He couldn't get any PHP to work inside a .html file, which is not surprising. By default, Apache won't execute PHP inside a .html file. In almost every case I've seen, PHP runs from a .php file. I think there are a small number of extensions that will execute PHP. I *think* .tphp or .phpt will work, where that's a PHP template, but I'm pushing my knowledge / memory. Remember that included (required) files are already in the PHP context or they won't work, so you could call an include file anything, although I can think of very few reasons not to stick with .php. That is, an include file is only meaningful if it's coming from a PHP context, so there must be a .php file somewhere down that call stack. Similarly, if PHP is file_get[ting]_contents() a file, it can be called anything.

So the short answer is change it to .php. I have never done anything different. You asked about speed. I'll address that further below. There are some interesting issues around that, but they are all very, very minor relative to what you care about right now.

He asked whether it is possible to run PHP from .html. I'm fairly sure you can override something in Apache, but I don't think I've ever seen it done.

With that said, some sites suppress .php. I just tested and found that one of the systems that shall not be named will let you do example.com/index.php . I find that the most "popular" system that shall not be named will not. ("WordPest" is not strong enough.) It will redirect (301) /index.php to /. I think the redirect is done within the bowels of the application and not in the Apache .htaccess, but I'm not entirely sure because I'm not fluent in rewrite rules. To the extent I understand them, I put it at a 70% chance that it's done in the "bowels."

I'll call the "popular" one WordPlague. Word pest, word pestilence, Word Plague, or simply The Plague. That works. I'll call the other one Stoopal because it will cause your development skills to stoop until you're dependent on the cane / crutch of it. You'll never be able to walk properly or run after too much use.

In any event, as best I understand without digging too much, often you don't see ".php" because all of the URLs are written with the assumption of a "single page" application and / or other redirects / rewrites. In both the cases of The Plague and Stoopal, most HTTP requests are rewritten and sent through index.php. index.php in turn starts a series of includes and conditional includes that processes the request. This is a case where looking at the superglobals in a debugger would explain a lot, but I'm not sure I can bring myself to care much about the content manglement (sic) systems.

You asked about a speed difference. That is not worth a second of thought or hesitation. I will delay renaming .html to .php until I'm sure using PHP directly in the file is the right answer. That's a matter of clarity, though. It doesn't make sense to use .php unless you're specifically using PHP.

Months ago, I might have leaned towards .php to leave myself the flexibility, but I'm getting better at rewrite rules, so I'm not as worried about that. In fact, I did a rewrite from .html to .php recently. It's very profound, isn't it? Note that using .htaccess files in various ways requires a specific "AllowOverride" in the virtual host's .conf file. All that stuff is in my sysadmin repo. If the machine in question is yours, I don't see a problem with very loose AllowOverride. The potential problems come with shared hosting where you are the host.

To do a fair test of a speed difference, you would want to run precisely the same content with .php and .html. You can set your /var/log/apache2/access.log to show microseconds. I've done it, as should not be surprising. The example is in my sysadmin repo. If you call a page twice using the same method, it would give you an idea. Or call it 1,000 times.
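
If you wanted to script that rather than eyeball the access log, a crude loop would do. A sketch; the URL is a placeholder:

// crude speed test: average wall time over many requests
$n = 1000;
$t0 = microtime(true);
for ($i = 0; $i < $n; $i++) file_get_contents('https://example.com/index.php');
$elapsed = microtime(true) - $t0;
printf("avg %.3f ms per request\n", ($elapsed / $n) * 1000);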

I should add that I've given some thought to turning microseconds back off. I was hoping they would provide a unique line identifier, but they do not. A 408 HTTP response (request timeout) will show up twice in the same µs. As best I understand, a 408 is Apache not-so-politely telling a browser to shove off and disconnect. That's all deep inside Apache. I'd have to dig to start to understand the context. In any event, those two lines are completely identical including the microsecond.

In any event, I'm not sure you would detect a time difference. If you did, it would be way too small for human detection. It's just not a consideration. There may be situations where you want to load some of your data with AJAX after the page has loaded, but that's a mostly separate issue. You're asking about very similar files as .php versus .html. You're also asking about tiny amounts of data pulling from only slightly larger data into a page. I see no problem pulling it directly from a speed perspective.

I have never tried to detect those sorts of speed differences and have not worried about that. There is an interesting consideration, though, between .html and .php that has been in the back of my mind for a very long time. When Apache serves an .html file, the Last-Modified in the HTTP reply header is the date of the file modification, and the ETag is based on the contents. That makes it easy for Apache to serve up a 304 response. 304 means that the browser sent an "If-Modified-Since" and / or an "If-None-Match" with the ETag. The browser is reporting what it has in its cache. Apache serves the document if it has changed or else sends back "304 Not Modified."

Contrariwise, if the exact same content were in a .php versus a .html, I don't think the .php has an ETag automatically, and the Last-Modified will be "now." If you want to generate those headers and honor "If-None-Match" and such properly, you have to do it yourself in PHP. I have dug at that a little bit in kwutils.php, but I haven't fleshed it all out.
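
The ETag half of doing it yourself is short, at least in its basic shape. A minimal sketch, assuming the entire response body is already in $html; this is not the kwutils version:

// emulate Apache's static-file caching behavior for a dynamic page
$etag = '"' . md5($html) . '"';
header('ETag: ' . $etag);
if (($_SERVER['HTTP_IF_NONE_MATCH'] ?? '') === $etag) {
    http_response_code(304); // the browser's cached copy is still good
    exit;
}
echo $html;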

If it's a borderline case between .html and .php, I consider it obnoxious to leave off those features. It's been one of my hesitations in creeping towards a "single page" website. I see the merit in single page. That is not one of my issues with manglement (sic) systems.

more immediate issues

I'll try to wind this down. I'll remind you again that there are whole sections of this blog--perhaps 70%, perhaps more--that I don't expect to make sense right now. I'm recording thoughts for the long term. I'm not sure there is a point in reading it over and over. Perhaps once every 6 - 10 weeks at the rate you're going. Your rate is fine; I just don't want to waste your time. Perhaps we should make an interactive application to help you track where you are with each subject.

As for your comment, "I feel like it's gonna start coming to me really fast, really soon." It has been my guess that there are parts of the learning curve that are exponential or perhaps x^3. But then you are pretty much guaranteed to be banging your head on various new issues, unless you ask for help (hint hint). Yes, I think quite a few things will start to make sense soon.

March 1, 2022 - continuing from yesterday (starting 21:07)

The credit card processing is now a very few steps from live. One big accomplishment is that I have for the first time used Apache virtual hosts exactly how they were meant to be used. I've used the "VirtualHost" tag a zillion times, but I've never set the same machine (and IP address) to serve two domains. I've never had a reason to before. It's sort-of-kind-of as easy as Apache's very first example. I give some "gotchas" below. One criticism I would make of their example is that www. should work via the "*" DNS A record. There should be no mention of www anywhere in DNS or Apache config or anywhere else. That's so 1990s. Oh wait... It's not that simple:

I was about to say that www.kwynn.com works just fine, but given my recent http to https redirect, it does not work just fine. I have the RewriteRule set to preserve the www or anything else. The routing works fine, but given that "www.kwynn.com" does not exactly match "kwynn.com," the security cert process rejects it. I have a solution for that!

/etc/apache2/sites-available/000-default.conf  
# old version: 
RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [L,NE]
# New version: 
RewriteRule ^ https://kwynn.com%{REQUEST_URI} [L,NE]
# end change

sudo apachectl configtest
# Syntax OK
sudo apachectl graceful

Then it works. Ok, with that said, I repeat my statement. "www" is so 1990s. STOP USING IT! STOP REFERENCING IT!

With that said, my client's site still has multiple references to www. I vaguely remember this coming up years ago. At the time, I was too inexperienced and / or chicken to deal with it severely. Right this moment, I don't want to confuse issues. I want the credit card code to be fully implemented. Then one day I will deal with "www" with extreme prejudice.

I started the process of VirtualHost "B" by copying a .conf file. I removed the security references to the exact site, but it turns out that leaving the "Include /etc/letsencrypt/options-ssl-apache.conf" was a bad idea when I was just setting up site B. It passed configtest, but it brought down both sites when I activated it and restarted Apache. So no references to security until you get farther down the path.

So, to get VirtualHosts working for both sites A and B, start with a stripped-down .conf file for the new site B. The following worked. Note that before you run certbot, the new site B has to work with http (see gotcha below). That is, the "certbot" process has to be able to access a non-secure version to verify ownership. "Working" means, in part, that you have your DNS A and / or AAAA records pointed to the right IP address.

ubuntu@ip...:/etc/apache2/sites-available$ more payt1.conf
# the following should be the result once created / edited:

<VirtualHost *:80>

        DocumentRoot  /home/ubuntu/payments_www
        <Directory /home/ubuntu/payments_www>
                DirectoryIndex index.php index.html
                Require all granted
                AllowOverride All
        </Directory>

        ErrorLog ${APACHE_LOG_DIR}/error.log
        CustomLog ${APACHE_LOG_DIR}/access.log combined

ServerName payments.example.com
ServerAdmin buessresume@gmail.com
</VirtualHost>
# end file

sudo a2ensite payt1.conf
sudo apachectl configtest
sudo apachectl graceful
sudo certbot certonly --apache --dry-run

# for real:
sudo certbot --apache

# it's fine to let certbot do the rewrite / redirect from http to https
# that last command does the a2ensite and restarts apache

After that, there is one last annoyance. It's not really a "Gotcha!" but it acts like one, and you could do a lot of wheel spinning if you don't realize what's happening. Now that I think about it, I'm pretty sure the only annoyance comes if you are testing the non-secure http when any https sites are involved. Firefox and presumably other browsers will be assuming a previous 301 permanent redirect from a given IP address to a URL. If you try to go to the non-secure site B and maybe the secure site B, you'll get redirected to the original site A. The solution may or may not be as simple as going directly to the https URL of the new site B. My solution was to use Brave instead of Firefox. I hadn't been to either site in a long while on Brave. After 10 - 20 minutes, the https version of site B worked fine in Firefox, but I'm not sure if that was due to some sort of timeout / expiration or because I went to the secure URL.

Next is another solution for 301 problems. Note that in the Control - Shift - I "Network" tab you can see the 301 redirect. Anyhow, another solution in Firefox: go to history, then hover over the site in question, then right click and "Forget About This Site." You may have to do that for several variants of the site. I don't think that messes with your saved password, but I'm not entirely sure, so keep that in mind.

Feb 28, 2022 - a day or a few hours in the life (starting 17:59; update starting 18:25)

Today / very soon I hope to get back to my main client's credit card processing. That is the main client whose project has always been part-time. I still need more work. I have one source that's looking promising for a bit of work, but it will only be a bit. I've also been getting nibbles for fairly big projects, but we'll see what happens.

For years my client has used the same credit card processor. (I only got involved with that side of the project very recently.) Apparently it has worked fine, so that's much of what matters. They are not a big name, though, and their documentation is not public. Non-public documentation is generally enough reason in my mind not to use a provider. In fact, I was heading towards suggesting this to my client. He created a login for me, though, and the documentation behind the login wall was quite reasonable. Their PHP example was straightforward and worked right out of the box, which is more than I can say for many such situations.

I have whittled on the example and am approaching an end to that phase.  "Whittled" means I confirmed that I can assume a US billing address and whole-dollars only (no cents).  I removed the fax number field (!?!?).  And I'm otherwise cleaning up their code.  It was plenty good enough as an example, but I want it to be cleaner.

The immediate task is to improve their final result handling.  The example simply dumps 1,000 characters of raw XML, which is not exactly comforting or useful to shod muggles.

I have never worked on the site that will use the credit card processor. The site is in the much-despised Drupal. I have more-or-less flatly refused to touch Drupal anymore. What I did was take one of his pages and save it to disk. Then I whittled out all the stuff specific to that page so that I could turn it into the payment page(s). I cleaned up the HTML to some degree, but it was deemed not worth the time to clean it completely. My plan is to put the form on a subdomain. I made my "new" page pixel-perfect such that no one will notice from the page itself that they have moved to a subdomain. Given that it's a subdomain, I can stay away from damned Drupal.

I should also mention that the example and thus working version is a different technique than my PayPal form. The PayPal form is meant to be very "simple" to implement with front-end JavaScript only. It accomplishes that, but I don't really like that format. Neither does my client. In the case I'm working on now, everything is on the back end, so I have much more and easier control. I am almost certain PayPal can do it that way, but at the time I was trying to do it the quick way. Given that their sandbox was having major problems, though, I probably would have been just as well off to do it in the back end. Even if the sandbox was having problems, the nature of the problem would have been clearer from the back end, too.

update, starting 18:25

For years my client has been using a 3rd party's form to interact with the credit card processor. When he sends people the link to the form, though, it's usually eaten as spam. I suggested linking to it from his website, but he said that it's dated anyhow, and he wants the form(s) redone. When I looked at the existing form, I was having a lot of trouble figuring out where the money went--who are these people? Obviously it's going to him, but it shouldn't be that hard to figure out from the outside. I couldn't figure out who they were, let alone find documentation. That's when I suggested using someone else. That's also when he said he doesn't like PayPal's "simple" form. And that's when he made me an account that I found very useful in making progress.

Even though their lack of public doc annoys me, I was about to link to them here. Their system has worked for years, their example is quite good, and their documentation is somewhere between plenty good enough and near perfect. When I went into the login system, though, I STILL had trouble finding their public site. Someone in fact has registered a similar domain that outranks the real one on Big Evil Goo; the fake one is a God-knows-what probable mischief site. The second result is very likely the real site, but they aren't even using an SSL cert. Given that even I finally started using an SSL cert some time ago, and then I finally did "the redirect" a few weeks ago, I'm going to discriminate. (I chickened out and did the 302 redirect so far. I suppose the 301 is in order very soon, given that the 302 shows every sign of working.)

So I'm going back in part to my original position. I would imagine they will continue to work fine long enough, but I have gotten the impression from the start that it's a dying product. They are supporting current customers, but they aren't looking for more under that brand. I have some loose indication that they are associated with Wells Fargo. Maybe they got bought out and re-branded.

Feb 22, 2022, starting 18:33

The madness has been on me for several days, although both my moon page and my eyes tell me the moon is days (waning) away from full, so presumably the madness is neither lunacy nor lycanthropy. The madness meaning that I am obsessed with various tasks of my pet, personal, non-paid projects.

So my web log loader loads efficiently, and in the last few days the verification is working more and more efficiently. I decided, though, that a 2 - 3 second delay due to ssh making the connection is not good enough. I also decided against a web app because there would be all sorts of permissions mucking between the www-data Apache user and the "adm" group that can get into /var/log/apache2. In the past I solved web access with a tiny C program that uses setuid() / the setuid bit. In that case I wanted C to call a very simple application with no parameters or complexity. This would involve empowering a relatively complex program, so I decided against web access. Also, given that I'm wanting to whittle down from a few seconds, the web path would cause its own delays.

So I have created a dedicated background PHP program / process that listens on a highly secret port number (that is sitting right there on my public repo). I minimize RAM with an input buffer and proc_open() into "$ openssl md4". I am using md4 because a combination of web searching and testing demonstrated that it's the fastest. In other words, the program sits there already running with the log file already open, waiting to md4 hash a section of the web log to compare against what's in my local database.
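
The proc_open() part looks something like the following sketch. I'm leaving out the socket listener and only showing the hash pipeline; the 64 KB buffer size is illustrative:

// stream a file through "$ openssl md4" without loading it all into RAM
$spec = [0 => ['pipe', 'r'], 1 => ['pipe', 'w'], 2 => ['pipe', 'w']];
$proc = proc_open('openssl md4', $spec, $pipes);
$fh   = fopen('/var/log/apache2/access.log', 'rb');
while (!feof($fh)) fwrite($pipes[0], fread($fh, 65536)); // buffered input
fclose($fh);
fclose($pipes[0]); // EOF on stdin tells openssl to emit the hash
echo trim(stream_get_contents($pipes[1])), "\n"; // e.g. "MD4(stdin)= <hex>"
fclose($pipes[1]); fclose($pipes[2]);
proc_close($proc);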

While I was at it, I refined my password hash interface to more easily create passwords and hashes. The socket listener authenticates the request with 0.02 seconds of password hashing. I want to spend minimal time on that, but enough time to turn cracking the password into an age-of-the-universe level task.
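
As for tuning the hash time, the idea is the same as the PHP manual's trick of walking the bcrypt cost up to a time budget. A sketch of the idea only; my actual interface is not necessarily bcrypt:

// find the highest bcrypt cost that stays within the time budget per hash
$budget = 0.02; // seconds
$cost = 4;      // bcrypt minimum
do {
    $cost++;
    $t0 = microtime(true);
    password_hash('benchmark-only', PASSWORD_BCRYPT, ['cost' => $cost]);
} while (microtime(true) - $t0 < $budget && $cost < 31);
echo 'use cost ', $cost - 1, "\n"; // the last cost that stayed under budget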

In short, it's pretty spiffy. I think it's done, but I haven't set it live yet. On second thought, I can probably live with 3 seconds. The app I created is a good model for future "listeners," so it was helpful and fun even if I never deploy it.

Several days ago I set the verification process to fork such that the database md4 and file md4 happen at the same time. I created a new forking class for this purpose that is more specific and thus simpler than the more general fork process in kwutils.
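
Under the class, that specific fork is about as simple as forking gets. A sketch with the database side stubbed out; it assumes the pcntl extension, and passing the child's result back (temp file, socket, whatever) is left as a comment:

// run the file md4 and the database md4 at the same time
$pid = pcntl_fork();
if ($pid === 0) { // child process
    $fileMd4 = hash_file('md4', '/var/log/apache2/access.log');
    // ...hand $fileMd4 back to the parent somehow, then:
    exit(0);
}
// parent process: compute the database-side md4 here, concurrently
pcntl_waitpid($pid, $status); // then wait for the child and compare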

On my part-time, few-hours-a-week paid project, I've had a productive few days in terms of efficient coding. The timecard specification has plagued me. It seems that I now have complex logic whittled down to about 11 lines. It only took me 5 years of intermittent work to achieve that. The overall phase of that project is that I am partially rewriting the application to extract it from Drupal. A quick search says that I've talked about this to some degree, going back to last July.

Merely extracting from Drupal, though, I have deemed not good enough. I spent a lot of time making sure my save-on-edit (keystroke) process was totally reliable. When I started dev'ing again, I immediately got 2 cases of false positives--a save indication when there was none. That makes a certain amount of sense given how drastically I have changed things, but I decided it wasn't good enough, so I have dug into a deeper rewrite, which of course led yet deeper. I should have vastly better code when I'm done.

Between what I call Brooks's Second Law and Raymond's Corollary, plan to keep rewriting it, because you'll keep rewriting whether you plan on it or not (Brooks's [First] Law). What I call the Second Law involves rewriting. What I call Raymond's Corollary is in his classic The Cathedral and the Bazaar.

Feb 19, starting 18:52

Going back to my previous entry, the sequence process I had would work fine assuming that the sequence existed before multiple processes started using it. Last night I put more code in GitHub where I trivially demonstrated the potential problem. If the sequence did not exist before a heavy multi-process load, there would be many failures.

I am usually referring to codename "recent CS grad" apprentice. Two nights ago "sales guy" apprentice read my previous entry. In answer to some of his questions, and to elaborate on this issue:

A real-world example is how to generate a sequential invoice number (1, 2, 3, ...) when there are multiple users who might hit "buy" at the same time. Some physical few bits of some computer somewhere must be dedicated to keeping track of which number the sequence is on. But it gets more complicated than that. The operation to check and then add to that number must be "atomic," or it must be a "transaction" in the database technical sense. An atomic operation is one that breaks or makes a mess if it does not complete all its steps. In this case, the operation is checking the existing number and then adding one. If one process checks and then another process checks and adds before the first one adds, then two customers will get the same number.

At some point, sitting on a toilet (under some conditions) becomes atomic. You have to finish the cleanup step, or you have a mess. In the data world, the data is either wrong or otherwise corrupted if an atomic operation fails and does not "roll back."

If you go looking, the classic example of an atomic operation / transaction is when a husband and wife with a joint account show up at two bank branches at the same time. If the timing of checking the account balance versus giving them their cash is precisely wrong, they overdraw their account. (Look it up.)

My code last night demonstrated the equivalent problem with a brand new sequence. I have a bit of code where if I only do the loop body once, the sequence can fail. If I loop twice, then one of the two iterations is guaranteed to work, and my code bears that out.
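
In PHP with the MongoDB library, the whole thing is close to the following sketch; the collection and field names are made up for illustration, and this is not my exact GitHub code. The two-iteration loop is the point--a brand-new sequence can lose the upsert race once, but the retry then finds the document:

require 'vendor/autoload.php'; // composer require mongodb/mongodb

function nextSeq(MongoDB\Collection $col, string $name): int {
    for ($try = 0; $try < 2; $try++) { // iteration 2 covers the new-sequence race
        try {
            $doc = $col->findOneAndUpdate(
                ['_id' => $name],
                ['$inc' => ['seq' => 1]],
                ['upsert' => true, 'returnDocument' =>
                    MongoDB\Operation\FindOneAndUpdate::RETURN_DOCUMENT_AFTER]
            );
            return $doc->seq;
        } catch (MongoDB\Driver\Exception\ServerException $e) {
            // two processes upserted at once; the loser gets a duplicate _id error
        }
    }
    throw new RuntimeException('sequence failed twice; something is very wrong');
}

echo nextSeq((new MongoDB\Client)->test->sequences, 'invoice'), "\n";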

Some time perhaps I'll elaborate on this. The point for my geeky purposes is that I finally have a sequence process that will work under all reasonable conditions. As I said in my previous entry, my musing over this issue has led me to all sorts of fun and games. I might have saved myself some time if I had done last night's code roughly two years ago. Then again, I did some cool stuff along the way.

Getting back to the real-world problem, note that Amazon makes absolutely no attempt to give their customers sequential numbers starting (decades ago) with 1. A quick search shows that as of several weeks ago, Amazon does almost 19 orders per second. We are at 1.6+ billion seconds and counting into the UNIX Epoch (seconds since the start of 1970 UTC). That's a 10 digit number. Even if Amazon had done 19 orders a second going back decades before it existed, that's an 11 digit number. As of March, 2021, I have an Amazon receipt with a 17 digit order number.

The format is 3 numerical digits, then 7, then another 7. The first one might be a server number. It would make sense that the individual server would be preventing duplicates.

Feb 18, starting 01:21

I got my PayPal donation page working. For most of my work "day," PayPal's systems were working fine. Based on their system status, it seems they were having problems late on the 16th, which caused me problems. Later in my day, the sandbox system died again, so I stopped messing with it. The live page is working fine.

Today my bigger irritant with PayPal is that it seems very difficult to correlate the "Transaction ID" with anything you can access "programmatically." The transaction ID is what both the payer and payee see, but it's hard to get at. I thought the difficulty was only because I'm using the simple client-side JavaScript button widget. Upon research, though, it's not necessarily easy to correlate with the full API.

As I investigated the API, it was hard to tell again whether PayPal was having problems or I wasn't getting my queries right. There is a transaction history function / API call, but I was either having difficulty formatting the time right, or there is a long delay until the data is available, or the sandbox was having problems.

I did some experiments with an "invoice ID." Both the payer and payee see that, too. I got that working (in the test version) in that I could create an invoice ID, but then there is the question of uniqueness. That question has been plaguing me in various forms for something like 2 years now.

So, my latest sermon on unique IDs... Most of the IDs I've been exploring lately are absurdly too long for an invoice ID; they are meant to be unique in all of time and space. I want the ID to be intelligible to my customers. So that brings me back to sequences (1, 2, 3, ...).

It was in part my uncertainty over getting sequences out of MongoDB that set me on the path towards my PHP extension to get a unique ID from time, the CPU clock's tick since boot, and the CPU (core / hyperthread) number.

So, after all this time, I decided to revisit MongoDB sequences. My fork class is easier and easier to use, so that means I can fairly easily set any task to use all my CPUs / cores / hyperthreads. So the setup is to set all the CPUs "demanding" a sequence at once, in a very tight loop. In my "code fragments" repo, I have posted the code.

MongoDB's instructions (indirectly) on sequences are somewhat vague, but now I can say with a reasonable degree of certainty that they work. MongoDB went to 500% CPU usage (6 of my 12 hyperthreads: 6 cores X 2 for hyperthreading). My 12 PHP threads divided the rest of the CPU usage. That was MongoDB doing a lot of work resolving locking issues. I demonstrated again that it takes a long time to resolve locking issues. That is, a sequence is not an option if the system is going to be banged on. However, if I asked for 950,000 sequence calls, the sequence at the end was precisely 950,000. (I started the sequence with 0 internally; the first call would get a 1.)

When I just "asked" Mongo for the sequence, that took much longer than actually writing rows with Mongo's default _id or my nanopk(). I will try to remember that I can "objectify" my array and use that directly as an _id. I'm almost certain that arrays aren't allowed. I suppose in future editions of nanopk() I should see how hard it is to return an object.

Feb 16, 2022 (starting early 17th)

I was battling PayPal's simple, few-lines-of-JavaScript button for about 5 hours. I first wrote 8 hours because it feels like that and more. Then I went back to look at a discussion I was having, so 5 hours is correct.

I'll come back to my shrieking about PayPal. I got Bitcoin and Ethereum working, I hope.

As for PayPal, I was having problems in sandbox mode. I hope that sandbox mode is far more problematic than live. The problem I was having was that the popup window kept crashing / closing. There was a 500 error, but that was rather useless for diagnosis. Even when it worked, various steps could take something like 40 seconds. The solutions, in approximate order of importance, were something like:

  • Clear all cookies of all sites on the button page.
  • Hard refresh the page (shift-F5 in Firefox).
  • In Brave, take shields down.
  • It may or may not be useful to remove all cookies from the PayPal popup.
  • Make sure you accept cookies on the popup. The cookie prompt can be slow, and I suspect trying to skip ahead of it was part of my problem.
  • You'll wait. Perhaps 40 seconds for certain steps, perhaps more.

Here is my release candidate code. Hopefully I'll go live in around 16 hours.

Feb 13, 2022 (starting 18:58, PS started 19:55) - Zoom

I see an ad for someone who wants to learn a given dev language. It's not one that I've done, but I've done 8 professionally and two others without pay, so I can probably help. I tell him just that, and I ask for some source code, so that I can get a feel of the situation. He mentioned he had a bug, so I figured the place to start might be to solve the bug. Given that he's just learning, I can't imagine that the code is sensitive.

So I ask for source code, and I get back precisely, "Can we do a zoom [sic] session to discuss?" This is an example of why I shouldn't be gig hunting without help. Here is the ultra sarcastic version of my response. I have no intention of sending it to the unlikely-to-be client.

I understand that the ad was for teaching, but I need to deal with the objective part first. I'd like to see some relevant snippets of the language first. Software dev is not improv; it's not live stage acting. Software dev in itself is not real time like driving. I like time to think about what I'm doing. There is simply not much to say to this person until I see some code and decide if I want to go down the path.

Furthermore, what the hell is it with Zoom? I suppose I'm going to go off the "professional" reservation, but the whole world is off the reservation. I'd never heard of Zoom until Billuminati Gates' Satanic Crusade. I can't get excited about the crime of war profiteering because arguably no wars fought by the US after 1815 were legitimate. I was trying to find historical instances of executions for such, but the closest I found off hand was 17 years in prison for David H. Brooks of DHB Industries, who died in prison in Danbury, CT in 2016. He sold ineffective vests that were not bulletproof. (Oh, of course, the $10M party was a bat mitzvah party. See my personal blog for "the preface" on that.)

Twitter and YouTube and company are so fond of their "misinformation" disclaimers. If Zoom wanted to make the following disclaimer, that would be one thing:

After due process of law, we deem it near certain that Dr. Fauci would be convicted of multiple capital offenses and possibly executed. Until that time, however, we will try to help keep life going with our service.

If they did that, I would probably be satisfied. Off hand, I see no such statement on their home page. Surprisingly, I don't see masks, so that's something, but not enough. The point being that I see Zoom as "Covid" profiteers, and I do get excited about that.

Also, Zoom goes in a similar category to SatanSoft and Satan's Operating System even before SatanSoft was obviously funding a war against humanity. Why on earth do people install proprietary software when there are free and open source alternatives? Even if you were using Zoom 3 years ago, why would you do that?

I asked you for source code. Until I have source code and have studied it and debugged it, no, I don't want to go live to discuss, and I certainly don't want to go live on Zoom.

possible solutions

To my apprentice, no, lurking in the background won't do. As I said, I am not going live until I get what I asked for. One potential solution is that you contact him and tell him we're connected. You can call me autistic or Asperger's or anti-social or shy or even a crazy conspiracy theorist. You can call me quirky and touchy and eccentric or even an asshole. You can say that I am incapable of politely asking for source code AGAIN, so you are politely asking for source code.

Another option is you do what you did with Flutter. At least this time we have a response. You don't have to mention his response to me; the point is this guy is responding. See what you make of the language. Possibly install a debugger. Then see if he'll talk to you. You can make an argument for yourself similar to mine, in that you've been at it in a number of languages for a very long time. I'm not necessarily encouraging you to take that risk, although as I said in the email to him, R looks like it's worth learning. I've responded to perhaps 5 R ads in the last year or two, but I think this is the first response I've gotten.

PS - 19:55

I mentioned a checklist. In some contexts, that would be useful. In this case, I've already made my checklist. I want some blanky blank source code. That's the one item on my list.

Feb 5, 2022 (starting 22:57; posting 23:45)

That may be some of the more satisfying dev I've ever done. The code is in GitHub. For reasons that would depress this entry, I corrupted my local database of my web server access logs, going back to late October. When I reloaded the file, it had become 400MB and 1.7 million lines. My loading program ate 2 - 4 GB of RAM, was swapping, was depressing to watch in "top," and took 5 - 10 minutes. Now it takes 6 seconds and consumes minimal memory. I am not quite comparing apples to apples, but I'll get there.

I'm not quite comparing apples to apples because the old version did the line parsing--separated the parts of a line and gave them digital meaning where appropriate. The new version only chops up the file into lines and puts them in a database with enough line number information to reconstruct the line numbers later.

The old version loaded the whole file into memory, so that was one problem. The new version sets as many cores / hyperthreads as one has digging into the file at the same time. The processes are reading the file line by line, so I am not loading multiple lines into memory. More precisely, I may load parts of two lines into memory, but that's it.

Another large improvement was to abstract "insert one" into "insert many." I learned that trick over 20 years ago, and it still works. If you insert a few hundred rows in an array with one command, that is tremendously faster than inserting hundreds of rows with individual insert commands. I created an "inonebuf"--insert one with buffering--such that the user can queue individual rows, and the buffering and inserting is done for them.
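
The shape of "inonebuf" is roughly the following sketch (PHP 8, Mongo flavor; the real kwutils version differs, and the batch size is illustrative):

// queue rows one at a time; flush to the database in insertMany() batches
class InOneBuf {
    private array $rows = [];
    public function __construct(private MongoDB\Collection $col, private int $max = 500) {}
    public function queue(array $row): void {
        $this->rows[] = $row;
        if (count($this->rows) >= $this->max) $this->flush();
    }
    public function flush(): void {
        if (!$this->rows) return;
        $this->col->insertMany($this->rows); // one round trip for up to $max rows
        $this->rows = [];
    }
    public function __destruct() { $this->flush(); } // don't lose the tail
}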

I created classes to simplify forking (multi-process / multi-core) perhaps 2 years ago. Now I've put those in kwutils (in the general sense, not the file). They work splendidly as written years ago.

Feb 3, 2022 (starting 03:32)

The moon phases are now in the user's / browser's local time.

Feb 3, 2022 (starting 01:01)

In answer to my disappearance over the last several hours, the lunacy took me again. Another apprentice came online who is in another timezone. So, my fun tonight started out considering adding a timezone offset from JavaScript (the user's browser) so that he would get times in his local timezone. This led to separating calculation from data acquisition (from the Python ephemeris) and storage. That led to cleaning up the various "magic numbers" where I'm trying to make sure to always have enough days of data. I also took my advice to use AJAX: I created a data buffer so that any given user is unlikely to suffer a delay when the almanac (ephemeris) loads. The almanac takes about 0.5s to load, so after every call to the app, there is an async call that makes sure there are plenty of days in the database. The user won't see any of that unless they are using dev tools, although they might see a spin somewhat away from my HTML doc. Do browsers show any spin (spinners) under those conditions? I don't know. I'll see what it looks like over time.

The alternative is to set a cron job. I might do that.

Anyhow, when I had everything working locally, I broke the live system for roughly 15 minutes. A violation of Rule #2 burned me. Locally I'm running MongoDB 4.4. (I can't run 5.0 because my CPU is too old.) Kwynn.com is running v3.6.

The violation of Rule #2 led to my having to seriously bend Rule #1. One can run a debugger remotely, but it strikes me as a bad idea. So I had to go do what is generally a gross violation of Rule #1. My current code in GitHub still has a file_put_contents() on line 29. The error was on the current calc.php (not data) line 41. $ala[0] did not exist. I did the file_put to see the data. The algorithm assumes that the data is ordered from earliest to latest timestamp. It would appear that locally MongoDB was returning documents in the order the data was put in the database, which is what I wanted. I could not at a glance figure out what the earlier MongoDB version was doing, but it wasn't in the order I needed. So the solution was to add an explicit sort, which is what I should have done in the first place.
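
The fix itself was about one line; the field name here is illustrative:

// never rely on "natural" order; ask for the order explicitly
$docs = $col->find([], ['sort' => ['ts' => 1]]); // 1 = ascending, earliest first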

The code is somewhat cleaner now, but I'm not sure that whole exercise was worth it.

February 2, 2022

I have posted my working lunar calendar.

You said, "I desperately need to start adding repos to GitHub in case these clients want to actually look at it." I'm not sure anyone in that context has looked at my GitHub, despite my mentioning it a number of times. It seems the people I most wanted to look didn't respond at all. I doubt that's because they looked at my GitHub and then decided not to respond, although I can't be sure.

I have learned over decades to be very careful what I do for that sort of reason. If you are working on something that does not absolutely have to be secret, then post it publicly to GitHub. Why not? *Maybe* someone will look at it one day. It serves as a backup and version control if nothing else. The effort to go backwards in time to post stuff should be nowhere near "desperate," though.

I have found GitHub to be very motivating, and it continues to be motivating even though it doesn't seem to have served the purpose you mentioned. When I have posted to GitHub, I have "published" in the academic sense, even if there is no peer review and even if no one ever looks at it. It's on the record. I also like the idea of SatanSoft paying for it. If they want to host my code for free, I'll take them up on it.

January 31, 2022 - Earth counters Luna (02:07)

Going back to yesterday's moon entry, I have begun my counterattack against the moon. There is nothing live yet because it's all back end so far. I have the Python ephemeris working. I feed Python output (painfully) into PHP and load it into the database for better sorting and querying. I have all the data I thought I had earlier. This time I've tested it down to the minute, several weeks into the future. I have a notes file with some installation notes.

In Python, SkyField deals with something called NumPy (?) that seems to have a problem serializing a simple array. Specifically, I can't simply json.dump() it. So I preg_match_all() in PHP, which is an obnoxious way to do business. It seemed faster than decoding NumPy, though.

There is a noticeable calculation time with the almanac SkyField function; less than a second, but noticeable. That's not a complaint; I'm starting to get some notion of how complicated that calculation is. That's one reason I'm saving the result in the database.

I'll eventually write code to tell JavaScript when Python should be called again for enough days of data in the future. I'm starting to think the easiest way to do asynchronous calls is, not surprisingly, with Asynchronous JavaScript and XML (AJAX). The alternative is to try to exec() in the background, but weird things generally happen when I do that--either directly or the debugger won't work. One day maybe I'll figure that issue out in PHP. Anyhow, when the data goes "down" to JS, the JS is "told" if it should do an AJAX query back to PHP to update the database. That's another weird way to do business, but, again, it's called async for a reason.

January 30, 2022 - when to collect from a client

This is the 3rd topic today. Should I have a separate blog for the business of technology? Mostly what I'd have to say is what not to do.

Financially and perhaps more importantly psychologically, for a project of any substance, I need some payments before it's done. The project needs to be divided into benchmarks where some aspect of it works--a proof of competence. Also, we can use an escrow service. Escrow.com charges a lot less than what I saw years ago. They have an "A" rating with BBB, last I checked. An escrow service would help, but I would still need interim payments.

Put another way, bill early and often in small increments.

I can go on about this at some length--both historically and for the future. I'll wait for your reply. How big a problem is this for you?

January 30, 2022, one of several entries today - new topic

I had never thought to check, but the "mysql" command is a symbolic link to "mariadb". Yes, you are correct, I should start referring to the mariadb command line as such. I'm sure I'm going to slip, though, and refer to "MySQL." It should be understood that I always mean MariaDB.

Did you set the MarDB password with sudo mariadb? You must have. The logic may have changed over the years, or it might not have. Apparently once you set a MarDB root user password, you have to use that password even if you're Linux root. Maybe you've come across something I haven't, but that seems like it could lead to trouble. Is there an override as Linux root? You might consider unsetting the MarDB root password and using specific MarDB users for each database or several databases. I have done it both ways over time. When I set a MarDB root password, I save it in a file only accessible by Linux root in the Linux root user's home directory. I do not have a MarDB root password set on my dev machine. I created a user for "main project."

January 30, 2022 - return to the moon

part 1 - when I still thought it worked or was about to work

That would be returning to the moon in an anti-gravity ship, not that rocket silliness. To quote the Cthaeh, "Such silliness."

The moon phase UNICODE characters didn't show up that night because I hadn't gotten that far yet. It was almost certainly not a mobile issue.

In the following, I'm much more making fun of myself, not you. You asked about "hundredth millionths of a second" precision. Given that I have spent an enormous amount of time on time (sic), I will seriously address your question. I realize you probably meant the point hyperbolically.

Yes, the count from 0 to 1 is in real time. No, it's not displaying "hundredth millionths of a second" precision. It's not that precise for several reasons. For one, pop quiz: what is the refresh interval? The numbers are not blurring to the eye. I wrote the refresh interval specifically not to blur. I want the user to immediately see that the calculation is in motion, but I don't want it to be blurring. The refresh interval is 7 orders of decimal magnitude from what you said.

To continue the pop quiz, what is the unit of the numerical argument to the function in question--the "refresh" function? That's 5 orders of magnitude. If you send an input below 1 unit, will it make any difference, or will the unit be interpreted as an integer? That's a very good question. I don't know the answer.

Also, 1 represents a lunar month. I am displaying 8 digits to the right. If I have the math right, that's displaying down to increments of 25ms out of the lunar month (7 orders of magnitude), but the refresh interval is several times higher than that.

I'm pretty sure the answer is the precision is equal to the refresh interval, but I may have lost track of the question.

Then there is the question of whether I could keep track of hundreds of millionths if I wanted to. In a compiled language, I am fairly sure the raw numbers in the CPU would keep up with real time to that precision, but by the time it was displayed in any manner, it would be several orders of magnitude off. If it were a fancy display using a GPU, then it would come down to that max refresh interval. I understand those are getting faster and faster, but when do they stop and conclude the human eye can't perceive anymore? Or are they going beyond that as supplemental CPUs? I guess the latter, but I'd have to dig.

part 2 - such silliness indeed

Hopefully I am not under the influence of the Cthaeh. On one hand, he would do much worse. On the other hand, it's hard to say what the effects will be.

After mucking about way too much, I did a check against a reliable source and realized that the "lunation" of the moon is rather complicated. I had assumed it was close enough to linear only in the sense that earth days and years are constant down to an absurd number of decimal places. (There are leap seconds every few years.) I thought the lunation period was the same. Silly me. Such silliness. "...the actual time between lunations may vary from about 29.18 to about 29.93 days" (WikiP). Any time I cite the WikiP I must warn that quite a few of its entries in absolute numbers, if not relative, will get you killed if you believe them. I most certainly checked against another source.

As for your second email on the subject, hours ago: like many applications, there is the external goal that you are trying to calculate or track or assist with, and then there is the implementation. In answer to one of your points, the "technical part" in terms of the orbital mechanics went over my head, too. Perhaps worse yet, I made a silly assumption and didn't even think about or look into the orbital mechanics. With that said, there are probably some decent tricks in my code such that once I have valid input, the processing of that input had some merit.

When and if I ever have valid input, Rob Dawson's "Moon/Planet Phases in JavaScript" code looks very interesting. It would be more realistic than my using a handful of UNICODE characters and opacity CSS.

As for getting valid input, I will hold my nose and grudgingly admit that this is one of a handful of situations where the best code is in, ..., rrrrrrr..., I don't want to type it...... Python. Not too deep in the bowels of my "code-fragments" repo, there is a Python script that uses a powerful ephemeris library.

I would likely write everything I can in JavaScript and PHP and then call Python with the exact data in the exact format that the library needs.

January 29, 2022 - howl-dev-ing at the moon

It became critical to know when I turn into a werewolf. Although, come to think of it, I didn't add that feature. Hmmm... We need a definition. One becomes a werewolf when the moon closest to full rises at one's location? That will take more work involving location--fairly easy--and location-specific moon calculations, probably not as easy, perhaps not so hard, either.

The whole disk of the moon rises, I guess? Seems that a sliver isn't enough.

It should be emphasized that the Quileutes are not werewolves. Edward pointed out to Caius that it was the middle of the day. This fact surprised Jacob. (I don't remember if that fact made the movie.) Me, read the book? Nah.

Anyhow, I worked on it for about 2 hours after you signed off. The version at that point has 7 different UNICODE moon symbols with the sky going dark at new moon and brighter towards full moon. It's only 3 columns now--date, UNICODE character on a background of varying blackness, and the 2-word description of the phase--waning crescent, waning [and close to] new, waxing crescent, etc.

Oh yes, here is the live version, and here is the snapshot of that code. Just after linking to that version, I moved the style tags back into the HTML rather than an external CSS. I'll get around to deleting the CSS eventually. Then I made a few more small fixes.

This is one main way to build HTML elements--by creating them as objects in JavaScript and appending them to the preexisting document. Note that cree() is defined in /opt/kwynn/js/utils.js. I mentioned this weeks ago in the context of various ways to handle MVC and 4 layers of code. The other way is to write the HTML directly in PHP, keeping HEREDOC string format in mind. They each have their cases and merits.
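
For contrast, the write-the-HTML-in-PHP way looks something like this sketch, with the data hard-coded for illustration:

<?php
// build a list server side; HEREDOC keeps the HTML readable
$rows = '';
foreach (['waxing crescent', 'full', 'waning crescent'] as $phase) {
    $rows .= "\t<li>$phase</li>\n";
}
echo <<<HTML
<ul>
$rows</ul>
HTML;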

In this case, everything is (client-side) JavaScript. If you were building HTML in JS starting from a database, one method goes something like the following (all source code):

<script> <?php echo("\t" . 'var KW_G_CHM_INIT = ' . json_encode($KW_G_TIMEA) . ';' . "\n" ); ?> </script>

A possibly better example would be when you init a larger array that the JavaScript cycles through and creates a row for each member of the array. You still init it the same way as above.
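A minimal sketch of that larger-array case, with assumed names and data (the real page would build its rows with cree() on the client side):

<?php
// assumed data; in practice the rows would come from a database
$rows = [
    ['date' => '2022-01-29', 'phase' => 'waning crescent'],
    ['date' => '2022-02-01', 'phase' => 'new'],
];
// same init pattern as above, just with a bigger payload
echo '<script> var KW_G_ROWS = ' . json_encode($rows) . '; </script>' . "\n";
// client-side JavaScript then loops over KW_G_ROWS and appends a row per member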

January 28, 2022 (first entry 17:14, entry 2: 17:52)

Entry 2: Note to self regarding my go-around with OAUTH2 recently: a rewrite rule would partially solve the problem of getting the XDEBUG flag into the redirect when Google sends an OAUTH code. That code expires in a matter of tens of ms or less, though, so if the debugger stops processing before a certain point, the code won't work. At least, in my reading I learned that a system clock being off by 5 ms could invalidate the code. So maybe there IS a practical reason to keep good time; imagine that.

Yes, setting up an environment--databases, libraries, debugger, etc.--is painful the first few times you do it, and even then it can be painful. Helping with that is one of my jobs, so it's best not to do such things while I'm asleep.

Otherwise put, expect such things to be a pain for the first several times you do them. Take notes; keep those notes in a very accessible place--perhaps publicly online or in GitHub. If you don't do it for months and months, then it can still be a pain when you forget.

When you ran sudo systemctl status mariadb , the results were somewhat puzzling. I had assumed that installing the mariadb server would install the client. Now I'm not so sure. So, for the record:

apt list --installed | grep mariadb
mariadb-client-10.5/impish-updates,impish-security,now 1:10.5.13-0ubuntu0.21.10.1 amd64 [installed,automatic]
mariadb-client-core-10.5/impish-updates,impish-security,now 1:10.5.13-0ubuntu0.21.10.1 amd64 [installed,automatic]
mariadb-common/impish-updates,impish-updates,impish-security,impish-security,now 1:10.5.13-0ubuntu0.21.10.1 all [installed,automatic]
mariadb-server-10.5/impish-updates,impish-security,now 1:10.5.13-0ubuntu0.21.10.1 amd64 [installed,automatic]
mariadb-server-core-10.5/impish-updates,impish-security,now 1:10.5.13-0ubuntu0.21.10.1 amd64 [installed,automatic]
mariadb-server/impish-updates,impish-updates,impish-security,impish-security,now 1:10.5.13-0ubuntu0.21.10.1 all [installed]    

When you see version numbers in package names, you generally do not want to install those versioned packages directly. You almost always want the unversioned package (e.g. mariadb-server), which pulls in the latest available version. The situation gets even more puzzling:

sudo apt install mariadb-server
# already latest
sudo apt install mariadb-client
# something new was installed.  Huh?

Whatever the case, does the "mysql" command exist? I realize you are going to use MySQL Workbench as a client, but you'll want that mysql command. "sudo mysql" will get you in as the root mysql user so that you can set a mysql root password for use by non-root Linux users. That is, mysql (mariadb) assumes a relationship between the Linux user and the db user unless you specify otherwise. You can get into mysql as mysql-user-root with sudo, but MySQL Workbench will need a MySQL root user password because you should not be running MySQL Workbench as root; it's just bad form.

I am puzzled why your mariadb status talked about the "mysql" (client) program at all. Mine does not. My suggestion about what was wrong with your Workbench connection was somewhat unconnected to that.

For future reference, I'm not sure how clear the distinction is in MySQL Workbench between "server not running" and "server is running but I, Workbench, cannot connect to it." In some ideal world one of us would take the time to document that. That distinction led to some confusion over the last few days.

As I think about it, your status results were even more puzzling. This is what I get:

systemctl status mariadb
● mariadb.service - MariaDB 10.5.13 database server
     Loaded: loaded (/lib/systemd/system/mariadb.service; enabled; vendor preset: enabled)
     Active: active (running) since Fri 2022-01-28 02:50:44 EST; 14h ago
       Docs: man:mariadbd(8)
             https://mariadb.com/kb/en/library/systemd/
   Main PID: 1179 (mariadbd)
     Status: "Taking your SQL requests now..."
      Tasks: 9 (limit: 9458)
     Memory: 217.9M
        CPU: 1.779s
     CGroup: /system.slice/mariadb.service
             └─1179 /usr/sbin/mariadbd

Jan 28 02:50:44 ubu2110 mariadbd[1179]: 2022-01-28  2:50:44 0 [Note] InnoDB: 10.5.13 started; log sequence number 862675097; transaction id 15437
Jan 28 02:50:44 ubu2110 mariadbd[1179]: 2022-01-28  2:50:44 0 [Note] InnoDB: Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool
Jan 28 02:50:44 ubu2110 mariadbd[1179]: 2022-01-28  2:50:44 0 [Note] Plugin 'FEEDBACK' is disabled.
Jan 28 02:50:44 ubu2110 mariadbd[1179]: 2022-01-28  2:50:44 0 [Note] Server socket created on IP: '127.0.0.1'.
Jan 28 02:50:44 ubu2110 mariadbd[1179]: 2022-01-28  2:50:44 0 [Note] Reading of all Master_info entries succeeded
Jan 28 02:50:44 ubu2110 mariadbd[1179]: 2022-01-28  2:50:44 0 [Note] Added new Master_info '' to hash table
Jan 28 02:50:44 ubu2110 mariadbd[1179]: 2022-01-28  2:50:44 0 [Note] /usr/sbin/mariadbd: ready for connections.
Jan 28 02:50:44 ubu2110 mariadbd[1179]: Version: '10.5.13-MariaDB-0ubuntu0.21.10.1'  socket: '/run/mysqld/mysqld.sock'  port: 3306  Ubuntu 21.10
Jan 28 02:50:44 ubu2110 systemd[1]: Started MariaDB 10.5.13 database server.
Jan 28 02:50:45 ubu2110 mariadbd[1179]: 2022-01-28  2:50:45 0 [Note] InnoDB: Buffer pool(s) load completed at 220128  2:50:45

In my case, it tells you the port is 3306. Or:

sudo netstat -tulpn | grep -i mariadb
tcp        0      0 127.0.0.1:3306          0.0.0.0:*               LISTEN      1179/mariadbd  

Your router is not at issue. The router has no part in the "network" traffic inside your computer. One diagnostic would have been to run "mysql" and / or "sudo mysql".

January 27, 2022

As with a week or two ago, cue "*autistic screeching*". If I haven't said it before, I hate OAUTH2. So, for the record, if you add a redirect URL in Google Cloud Console, you have to add the URL to your client_secret file IN THE SAME ORDER!!!! Otherwise, you get '{"error":"redirect_uri_mismatch","error_description":"Bad Request"}'. Here is my debugging code.

To make matters worse, I am using debugging code in part because I could not think of a good way to conform to my rule #1. That is, given that the redirect URL is called from Google, how do you get Google to put a precise URL query in the URI such that it invokes the debugger? I tried putting the xdebug query in there, and I'm fairly sure it simply told me it was an invalid URL. I found instructions for getting Google to pass data in a "state," but not a literal, precise URL query. The debugging code shows where I use file_put_contents with FILE_APPEND. It led me close to the problem, and then it occurred to me that order might matter. At least, I'm 87% sure that's the problem. I am not going to reverse it right now just to check.
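For what it's worth, the breadcrumb technique itself is simple; a minimal sketch, with an assumed log path and helper name:

<?php
// hypothetical helper: append-only breadcrumbs for code paths the debugger can't reach
function dbglog(string $msg): void {
    $line = date('Y-m-d H:i:s') . ' ' . $msg . "\n";
    file_put_contents('/tmp/oauth_debug.log', $line, FILE_APPEND);
}

dbglog('callback query: ' . ($_SERVER['QUERY_STRING'] ?? ''));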

This is almost reason in itself to get away from Big Evil Goo and run my own mailserver on my own domain.

January 26, 2022 - ongoing entries, 17:15, then 19:26

debugging MariaDB connect issues

If MySQL Workbench can't find the service, the service probably isn't running. Usually a service starts when it's installed, but not always.

# no harm in doing a restart if it's already running
sudo systemctl restart mariadb
sudo systemctl status mariadb
# ...      Status: "Taking your SQL requests now..."
# That's funny.  I've never noticed that before.
ps -Af | grep -i mariadb
# 7278 is the process ID.  Yours will be different in all probability
# mysql       7278       1  0 19:27 ?        00:00:00 /usr/sbin/mariadbd
sudo netstat -tulpn | grep -i mariadb
# 3306 is the port I would expect
# tcp        0      0 127.0.0.1:3306          0.0.0.0:*               LISTEN      7278/mariadbd       

snap versus apt 2; Firefox 2

A double installation was not the problem I had. It was something much more subtle. If I may be so bold, your whole email should be entered into the record. You're anonymous for now, so I'll just do it.

I have spent well over half an hour doing all sorts of such things.

funny, the only issue i had with snap vs apt was exactly that, firefox. i just had to apt remove firefox, because somehow i had 2 copies of firefox floating around... randomly, (well probably not random at all) in certain apps when I would click a link (like discord or thunderbird) it would open in the older firefox (apt) version, and the rest in my new firefox (snap install). the only reason I could differentiate the 2, and know which one to uninstall, was when I went to help/about in firefox, it actually said it was a snap package.. so I knew to remove the apt version.

previously to that, i wasted about a half hour trying to find out how to set the default browser to load in thunderbird when a link is clicked, and sadly i didn't find any way to do it.

snap versus apt

The trend appears to be towards snap, so if there is a snap, go with it. I was playing with something once that was causing trouble as a snap versus an apt, but it wasn't that important an issue. I think it was Firefox, but I'm not even sure.

MySQL, MariaDB, and MongoDB continued

In response to your email a few hours ago... You may have misunderstood the choice I was positing. If you're using Ubuntu, MariaDB is the way to go. There are supported packages for MariaDB. MySQL was abandoned by much or most or almost all of the open source community over concerns that Oracle would corrupt it. (Oh, Oracle is a funny coincidence, huh?)

As best I remember, every site I've seen over years has moved to MariaDB, except perhaps for the insane people who were still using PHP 5.x.

Yes, I do recommend installing MySQL Workbench directly from Oracle. I don't think there is an Ubuntu package for it anymore. MySQL Workbench gives you an enormously prettier SQL command line to work with, as opposed to the pure "mysql" command line. Even though I'm using MariaDB, the commands have stayed the same, so it's still mysql. MySQL Workbench will "express" "confusion" over connecting to MariaDB, and it will warn that it's not supported, but it will work.

So, going back, the choice I was positing was not between MariaDB and MySQL. The choice was between MariaDB and MongoDB. I have probably a few hundred lines of working MongoDB code in my repos. I have 0 MySQL code on GitHub. I drank the MongoDB (object oriented database) Kool-Aid in 2017, and I've started using it whenever I have the choice. I won't be using MySQL for my main (part-time) project for much longer. As soon as I free it from Drupal, I'm done with MySQL.

MySQL is not going anywhere, though, and I'm in the minority of PHP and MongoDB users. MongoDB is much more closely associated with Node.js (MEAN / MERN stack). MySQL will have much better examples out there, too.

I said that there would probably be some grumbling on my part if you chose relational (MySQL) over OO. We'll see when and if the grumbling begins.

January 24 - 26, 2022 (20:06 - entry 2 1/24; first entry of the day drafting at 02:14 1/24, posted 1/26)

entry 2 - 20:06

Going back to the previous entry, I can now delete the .htaccess I created for this directory, because now the whole site is forced to httpS. For the record, it looked like this:

RewriteEngine On
RewriteCond %{HTTPS} !=on
RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [L,NE]           

This evening's main topic, however, is my moving towards creating a new instance for kwynn.com with the Nu HTML validator built in. I'm going to have to add Java, and that's probably enough of a disk space issue that I should up the disk space while I'm at it. I've been bordering on needing to add disk space anyhow. I have a 12 GB disk which can get around 83% full when updates are pending. My main client has a 16 GB disk, and that only ran out once when I was asleep at that particular switch for months and months. So I'll go to 16 GB.

Hmmm... AWS now has an Atlanta data center for EC2. On one hand, that's tempting. On the other hand, seems I should keep my system away from me such that any incident of the century doesn't affect both. On the gripping hand, the DC area seems somewhat more vulnerable than it has in decades.

The goal of this next instance is simply exploratory, so I'll stick with Virginia for now. It looks like t3a.nano is still my best bet. The t4 systems violate my rule #2 way below--the live system should be as close as possible to this computer I'm typing on. The t4 systems are ARM rather than x86_64.

Doesn't look like Atlanta is an option. Their instance type availability is very limited for my purposes. Boston is also limited; Boston is relatively new, also. Anyhow, a quick scan says still in Virginia with a t3a.nano at $0.0047 per hour.

To back up a bit, it's not just a matter of space. I'm concerned about just installing Java in my live environment. Java is reason enough to test the environment.

How much more expensive is gp3 ssd? Kwynn.com is currently running the security group labeled launch-wizard-3. Boy does AWS create a bunch of those by default. I will create a new key pair / PEM while I'm at it.

Upon entry into the new instance, turn off cron jobs. It appears Nu needs a JDK in addition to a JRE because it compiles stuff (javac). And then 512 MB of memory is not enough to compile. Time to invoke that .large EC2 type from months ago. Enable proxy and proxy_http. Open port 9999 in the security config.

My conclusion for now is that Java and / or Nu are too much of a RAM pig to put on a t3a.nano instance with 512 MB RAM. When testing, everything worked, but at one point Nu hung for something like 30 seconds. Right now on my local machine Nu / Java takes 300 MB.

entry 1 - 02:14

For kwynn.com, I finally did the server / virtual-host-level rewrite rule to redirect all http to https. Here are the running Apache config files. I may have aged a little bit just before I restarted Apache to see what would happen.

January 22 (first entry after 17:03, continuing 18:35)

Apprentice is trying again to run my chat code, as mentioned yesterday. First, I have a chuckle that "php fatal error" must seem ominous to him. On some days I probably get several fatal errors an hour while dev'ing, usually syntax errors. I'd imagine I've mentioned it before, but I remember in the C language in 1992 syntax errors could take me 3 - 4 hours to solve. Helping with such things is what I see as one of my major jobs as master dev for an apprentice. I don't think I learned all that much taking 3 - 4 hours to solve syntax errors. I probably just got that much closer to getting frustrated and changing majors.

In fact, I know someone who did that in 1985. At that point, the compiler simply said "syntax error." No line number. He'd had enough at that point. I at least had a better indication in 1992.

It's rare that syntax errors take me more than a handful of seconds to solve these days. Some of the most puzzling instances that have taken longer involved my violating my own rule #2 to the effect of your dev environment should be as close as possible to the live system. In those cases, the "Heredoc" PHP string syntax worked differently in PHP 7.4 versus 7.2. Thus, something worked on my system and gave a mysterious syntax error on the live system. Worse yet, the include / require_once() tree was such that a syntax error crashed the whole system. I only have one user of that system, and it was late enough at night that I doubt he had any intention of using it, but still.

The difference in "heredoc" was something about whitespace--the higher version of PHP allowed whitespace in places that 7.2 did not.

Anyhow, I have already emailed him back. I am not tormenting him waiting for this entry. I still don't know exactly what is wrong because I'd need to be looking at his system. However, I can give some (more) general advice and repeat what I said in email.

When I say "include," I should clarify even more than I did in email. Generally speaking in a number of dev languages, an "include" means to incorporate code. The precise keyword "include" goes back to C if not before. To make the history somewhat more complicated, includes work differently in C because C is compiled. I'll stick with PHP from here on out. If I say "include" in PHP, I am using the word in the general sense to mean incorporating code. In terms of precise syntax, I almost always mean require_once(). I would state it as a general rule that one should use "require_once()" unless there is a really good reason not to. I would guess that I use require_once in 99.7% of cases. I can think of one specific case of using include and one case that I can't quite remember the details of.

Anyhow, on to his specific problem and the general debugging methodology. Once again, in this case I'm using "debugging" more generally because a debugger may or may not help him in this case. It may not help because his situation is akin to proving a negative. He's getting an error very close to "Class 'dao_generic_3' not found in line 54."

His first problem is very likely my fault to a degree. dao_generic_3 is my creation in this file. I probably should have called it kwynn_dao_generic_3 or something to make it clear that it's my creation.

So let's back up to the more general case. If a function or class isn't found, the first question is how far from the PHP core / default it is. If the function is in the PHP core / default install, then you should never get a not found. There are a few classes of functions, though, that are not installed by default. So, step 1 in debugging is to search the function / class on php.net.
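Alongside the php.net search, you can ask the runtime directly whether a name is defined; a quick sketch:

<?php
var_dump(class_exists('DOMDocument'));   // false until the php-xml extension is installed
var_dump(class_exists('dao_generic_3')); // false if the defining file was never included
var_dump(function_exists('mail'));       // true: core PHP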

To take an example that I encountered all too recently, let's try "mysql_escape_string." The PHP doc will inform you that the function and "the entire original MySQL extension was removed in PHP 7.0.0." What part of that is unclear? This is one of several situations that wasted enormous amounts of my time. The PHP release history page informs us that PHP 7.0.0 was released on December 3, 2015. Let's just say that I came across the mysql... function fairly recently. When I spluttered incoherently about encountering this problem for the first time in YEARS, I was told "no other developers have complained about this." Yes, those would be the developers who are happily charging you $150 / hour (a real figure) to maintain an ancient environment.

Similar indications of madness were that one of the systems--perhaps the live system--had a clock that was several hours off in part because the system hadn't been rebooted in 900 days or something. A few hours later I noticed that someone had used the "date" command and reset the clock.

I don't expect other people to be insane as I am about timekeeping, but *hours* off!?!?! Having to use the "date" command?!?!?!

Anyhow, I digress. The point being that if you ever have the misfortune to see a "mysql..." function, the PHP documentation will at least tell you what's wrong. Solving your problem is a rather bigger issue, especially when the people involved are likely insane. This brings up an interesting question. Can developers diagnose insanity based on years-old functions or hours-off clocks? Perhaps so.

Now I'll give a much more reasonable example. In kwutils.php is getDOMO(). It calls DOMDocument. The PHP documentation within a few clicks mentions "This extension requires the libxml PHP extension." This situation is admittedly at least mildly painful until one gets used to such issues. The solution in this case is "sudo apt install php-xml" in Ubuntu. Hopefully Big Evil Goo gives that solution simply. I knew it from searching my install list.
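A minimal sketch of the reasonable path (not my getDOMO() internals): check for the extension, then use DOMDocument:

<?php
if (!extension_loaded('dom')) {
    die('DOM missing; in Ubuntu: sudo apt install php-xml' . "\n");
}
$dom = new DOMDocument();
$dom->loadHTML('<!DOCTYPE html><html><body><p>hi</p></body></html>');
echo $dom->getElementsByTagName('p')->item(0)->textContent . "\n"; // hi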

A related problem I've had involves one of the MYSQLI constants, and that's MYSQLI, the current version, as opposed to MYSQL. I remember chasing my tail for quite a while because a MYSQLI constant wasn't found. The solution was as above to install mysqli. Ironically, it's been so many years that the install is "sudo apt install php-mysql" rather than mysqli. Both Ubuntu and PHP assume that it's been so long that if you're still running the old mysql library you are insane and beyond help.

Anyhow, to continue down the path of undefined stuff, and back to apprentice's specific problem, he would hopefully find that dao_generic_3 is not defined by PHP or anything he can find. Again, this is an argument that I should have called it kwynn_dao... I should also perhaps make it more obvious more often that /opt/kwynn on my system is a clone of what I generally call "kwutils." In his case, he does know these 2 points. We just had another exchange while I'm writing this. My new advice is of general importance.

To back up a bit, though. Sometimes I do something like "cd / && sudo grep -R dao_generic_3" That would quickly result in "opt/kwynn/mongodb3.php:class dao_generic_3 {" Much could be said about that, but let's stick with this specific debugging path:

I didn't demand he recursively grep. Given that the details of the above are not obvious, I mentioned that the include execution should go from kwutils.php to mongodb3.php. The fact that it's not in his case is baffling. We're both fairly certain he has the latest of both repos of code.

Which brings me to:

minimal functionality debugging technique

I want to run around in circles waving my hands in the air and screaming when I see people post 100 lines to a forum and want debugging help. First of all, I am not criticizing my apprentice nor even the people posting to a forum. Because a main job of mine is to reduce frustration, my apprentice should be asking me sooner rather than later. And here we are a while later, and I still don't know what's wrong in his case. The solution is not obvious, and even if it were obvious, I don't want him spending large amounts of time on such things.

With that said, for future reference for forum posters and apprentices, the minimal technique goes something like the following. I am assuming he is using the debugger and stepping through line by line. Actually, I'll put this in GitHub. One moment please....

Here is my minimal debugging example.

January 21 (2 versions -- 16:39 and 16:56)

This is the part several minutes later. So that means you have "base" PHP / cli PHP running or you would have gotten an error of program not found. As I mention below, server.php has no output yet--that is, it has no output when it's run as PHP. I should add while I'm at it that sometimes I'll use "exit(0);" or "return;" (or "continue;") for no apparent reason. I'm not entirely sure about NetBeans 12.4 and / or the latest xdebug, but slightly earlier versions needed some code on a given line to make it a breakpoint, so I will write exit or return or continue. That's why I also created the null function kwynn(); Sometimes I would use "$x = 1;" which causes great gravy grief if one is processing weather radar data with x and y coordinates. *sigh* So THEN sometimes I'll use "$ignore=34523;"

The incident with the weather radar took me way too long to debug.

Back to server.php. No, you don't have to run server.php separately because if Apache PHP is running, the browser calls server.php and thus runs it. However, running server.php from the command line is a good idea for debugging purposes, as I mentioned a few weeks ago. Generally speaking, there is an argument to be made that web code should also run seamlessly in CLI mode because it's often easier to deal with / debug code in CLI mode. That's why I created the kwutils.php function "didCLICallMe" because I set up various files to run standalone in CLI mode.
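This is not the actual didCLICallMe(), but the usual test behind such helpers is a one-liner; a sketch with an assumed name:

<?php
// hypothetical name; kwutils.php has its own version
function ranFromCLI(): bool {
    return php_sapi_name() === 'cli';
}

if (ranFromCLI()) {
    // standalone debugging run: fake the input the web path would have provided
    $_POST['msg'] = 'test message';
}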

Below is the first ~16:39 entry:

I started a web chat app. I'll point to the specific version in question. When apprentice hits send, he gets an XML parsing error, unclosed tag, in line 1, column 1, from the output of server.php. That is, he gets that in the JavaScript console. I assume that PHP is not running at all, but Apache is very likely returning the Content-Type as text/html in the return HTTP header. Apache is doing that based on the .php extension. So the browser is expecting HTML which these days should be in the form of XML. If PHP is working, the response would be completely blank because the whole server file is a PHP application with no output. I have put my debugger on line 6 and confirmed that the server is receiving data, so that's a form of output to me for the moment, to know that it's working so far. kwjssrp() in PHP and kwjss in JS are both in /opt/kwynn/js, and the relevant PHP file is included (require_once()) from kwutils.php. As a reminder to the hordes of people reading this, /opt/kwynn is a clone of my general utils repo.

Back to the browser--it sees the < opening tag of php, and the PHP tag is not closed for reasons I have gone over rather exhaustively, below. If you look at the server.php output in the browser (the HTTP response), it will almost certainly be the precise PHP code rather than executed code (which in this case would result in ''). The browser is trying to interpret this as HTML / XML. So, again, my assumption is that PHP is not running at all through Apache; Apache is just running the raw file. Given that this is your brand new install, I assume the solution is

sudo apt install libapache2-mod-php8.0
sudo systemctl restart apache2

If you haven't installed base PHP yet, I think the above will do it.

January 15, 2022

As I'm writing this, as of 18:37, apprentice reports a successful install of Ubuntu 21.10. It should be recorded as 21.10 rather than 21.1 because the Ubuntu format is YY.MM. Ubuntu releases every 6 months in April (04) and October (10). The even-numbered years in April are LTS (long term support) releases, meaning support for something like 5 years. The other 3 releases in the cycle are supported for something like 9 months, so that you have 3 months to upgrade.

As an action item, does the following work?

sudo apt update ; sudo apt dist-upgrade

When I say "work," I mainly mean do you get any indication of broken packages? You may have no packages to install. You may also get an autoremove message. There is no need to do the autoremove step, nor is there any harm. I have never had a problem with autoremove. Also, it's possible you'll get a message about another process having the lock because "unattended upgrades" may be in progress. Generally, that's fine; just let the unattended finish. Sometimes unattended dies, though, and you'll sit there forever waiting. Usually that only happens when I'm updating a system running on a USB drive plugged into USB 2.0. (I have old hardware. I do have a USB 3.1 PCI card and use it all the time, but my hardware is old enough that it won't boot off of PCI devices.)

Which brings up some more commands, in this case to know when a process like an upgrade has died. One is "top". I would play with that relatively soon. It goes in the category of you should know what it usually looks like, so you know how to look for anomalies. When an upgrade is working, or any other disk-intensive process, the "wa" (I/O wait) number will be very roughly 16 - 25% as opposed to near zero.

"sudo apt install iotop" and then "sudo iotop" is similar in that it does the same thing with disk data transfer. Again, I recommend looking at what's normal.

When I start a new partition / system / install, I don't bother moving data as part of the process. I move the data from the previous partition as needed, which is a good way to keep things clean.

Ubuntu comes with backup software. I have never played with it, but it might keep track of full backups versus incremental backups. That would solve your redundancy problem. The "rsync" command does that, too. I've been using rsync for the last several weeks to upgrade kwynn.com with this local copy of the DocumentRoot tree.

Please tell me you didn't install 3rd party software? If you did, just stay on top of broken packages.

In answer to one of your points from weeks ago, I just moved the Ubuntu Symphony JavaScript to an external file. I did this in part because I have no plans to touch the code ever again, and it was sitting there in this file in my way.

written before I knew you were done with your install

As for your immediate problem, I wasn't clear on at least a couple of points. Just below I go back to immediate solutions. But, first, to clarify one point: not installing 3rd-party / proprietary software is for future reference; at the moment, that damage has already been done. For future reference: when you install Ubuntu, there are only a handful of (sets of) options as you start the install. One of them is whether you want to install 3rd party software. It asks you this about the same time as it asks whether you want to update packages as part of the install. For future reference, I recommend against installing the 3rd party software, in part because I think it caused your problem. I suspect it's caused me problems in the past, too.

But back to the immediate problem, you may be able to solve everything if you just remove those 2 packages that are causing you problems. In case it's not clear, it's possible everything you're describing traces back to that. I will reiterate the point: if you see any indications, ever, of broken packages, back up your data and then attack that problem.

In the last few hours, I booted into my partition that is broken in a similar manner as yours. I was hoping to start removing packages to definitively record how it's done. My partition is far more broken than yours, though. The NetworkManager service was apparently "masked" by the cascading failure I had weeks ago. I had to look "masked" up. It's a step beyond "disabled" in systemctl / systemd. That means I had no network. My attempt to "unmask" it deleted it rather than unmasked it. I have almost no clue what that's about. Without a network to look things up and let Ubuntu load software, I decided it was time to give it up on that partition.

"$ man apt" indicates that it may be as simple as "$ sudo apt purge <package>" or "$ sudo apt remove <package>" After messing around with that broken partition, though, I decided I am not removing anything at all on any of my partitions.

If you decide to start over, I would install 21.10, the Impish Indri; I've been running it since soon after it came out. Impish is not LTS, so the consequence of this is that support will end in roughly July of this year, so by that time you should install 22.04, which will be LTS. My jury is still out on this decision in general, but I am leaning towards keeping up with every 6-month, full version. Probably the way to go is to upgrade your dev machine to every full version and then upgrade any clients once you're satisfied that their software will work.

One of the reasons I say this is because in addition to 3rd party software, I think you were zapped by Ubuntu's deprecation of 32 bit systems (i386). I'm assuming Ubuntu deprecated them some time after 20.04 based on observation; I'm sure you can find the official doc.

"Try Ubuntu" and a permanent USB install to let you boot

A USB installer has a "Try Ubuntu" mode. I recommend always having an installer USB on hand to give you something to boot from. Note that the "Try Ubuntu" mode will not save your data, though. Any data saved to the USB only lasts until reboot. You can of course save to a standard filesystem.

I also recommend installing from one USB to another USB such that the second one has a permanent install. Then you can do work on the full install. Also, the full install can be set to boot to your SSD or hd partitions. (You can do multiple installs onto a USB just like onto any other disk.) Note that when you are trying to boot from A to B, though, A must have a common kernel with B. Thus, you need to periodically update the USB. Also, when you are in Grub (just after power-on-reset) you'll notice that there are various options. Several of those options allow you to look for a common kernel. If you don't have a common kernel, when you try to boot into a partition, you'll get a message to the effect of "cannot load."

After you update the USB, "sudo update-grub" is what syncs the USB with what is available to boot to on other disks. Note that update-grub will set the partition running grub as the default boot partition from that physical device. This can get confusing if you run grub from an SSD or hd. If you had 2 partitions on an SSD and you update-grub from one, it would then be the default, which may not be what you wanted.

As I mention this, you should look into partition numbers and disk names (/dev/sda7). You should know what your / actually is on the /dev level. The spacing of this is very precise: "mount | grep ' on / '" and / or just run mount to look at it. I can't imagine what harm you could do if you run mount as the standard user (not sudo).

data management, commands that might save you in case of trouble, and one means of backup

You may do this anyhow, but consider various degrees of separation between a filesystem needed to boot / run and large data that is slower to copy. That is, your data and running files go into at least 3 categories: the files needed to run Linux, your small data such as code you type, and then various degrees towards your large data. Consider keeping your large data in one file tree that can be isolated or better yet on another partition. It makes it easier to know what you really need to back up, and it lets you copy a whole partition and then set it up to run.

The following is not definitive in that I am not going to take the time to test it over and over. It's something to explore.

One reason I mention all this is that now that you have a running, pure partition, possibly with little of your data, you might want to make a copy and learn how to make that copy runnable. First, I highly recommend doing the copy itself from another booted system, such as an install disk running in "Try Ubuntu" mode.

I often do the copy with the GUI: "sudo nautilus" Nautilus is the "Files" manager. If you run it as root, it can copy anything. There is a recursive "cp" that does this and preserves dates and users and such, but I lose track of what all those switches / arguments are.

So, make your copy. You should probably give the copy's partition a name and partition label. Otherwise, in /media/<your user> you'll get a UUID and have to figure out which one is which. In any event, navigate in the command line to the root of the other partition, which will start as /media/user/something. Below are the commands that are not harmful as long as you exit the shell when you're done, or at least exit from chroot. After those I give the last command, which is probably not harmful but may be confusing.

First, you might want to mark both / and both /home/user with a file indicating which copy is which such as "iam_2022_0115_18_install"

# be in the root that you want to make THE root.  That is, pwd shows the relevant root
sudo mount -t proc /proc proc/
sudo mount --rbind /sys sys/
sudo mount --rbind /dev dev/
sudo chroot .
      

At this point you will be the root user (indicated by #) and the filesystem root will be the "new" root that you want to make independently bootable. Then I *think* a "sudo update-grub" is enough to make it independently bootable. I would not do this, though, until you are sure you can boot into a working partition from USB as above.

The reason you need to set grub from the new partition is that otherwise all of its mount points are set to the old partition because it is a copy. Before grub, the chroot sets / to the new partition, then grub sets the bootloader to recognize this new partition as bootable.

NetworkManager - something I learned today.

I already knew the systemctl commands below, but I didn't know the name of the network service.

Somehow when I booted into the broken partition, my active NetworkManager unset itself to a degree as I'll explain. That is, when I booted into the active one, I didn't have any network. I had to both (re)start it and re-enable it to start on boot:

sudo systemctl start NetworkManager
sudo systemctl enable NetworkManager
      

January 14, 2022

Ubuntu package problems and some solutions

My currently active apprentice is having package (software install) problems. These are some thoughts.

For one, when installing Ubuntu and probably other Linux distributions / "distros" / sub-brands, the 3rd party / proprietary packages / software are probably not worth it. That's one question Ubuntu asks by default upon install. I suspect that's the root of his problem. Specifically: when he runs "sudo apt install --fix-broken" these 2 packages have problems: "Errors were encountered while processing: libgl1-amdgpu-mesa-dri:amd64 libgl1-amdgpu-mesa-dri:i386"

When I run "apt list --installed | grep libgl1" I get libgl1-mesa-dri and libgl1/impish. I am reasonably sure this means he installed AMD specific drivers and I did not. I can run FlightGear reasonably well on very old hardware. I'd have to go digging for my graphics card specs, but I remember it only has 1 GB RAM which is old. Maybe the proprietary driver would help, but, as above, it's probably not worth the cost benefit, and I should get better hardware when I can.

I suspect he's also having problems because Ubuntu is pulling away from i386 (32 bit). That is an argument for installing the latest Ubuntu rather than the LTS version, but that's another discussion.

The GUI / Gnome / desktop was probably almost literally making noises about a package problem. Whether it was or wasn't, there is an argument to be made for checking by hand perhaps once a week:

sudo apt update
sudo apt dist-upgrade
       

If you get the package error message, do not ignore it. In fact, it's probably time to back up your computer. This has not happened to me often, but it has gotten out of hand perhaps 3 times in 12 years. Usually I was asking for it, but that's another discussion. (Below I list one instance of trying to downgrade PHP.)

In any event, solve the package error ASAP.

The solution to the problem is usually to remove the packages in question, and really thoroughly remove them. I can't think of a way to quickly set this up to demonstrate, so I'll have to refer y'all to Big Evil Goo for the details. The commands are "apt remove <package>" and, more thoroughly, "apt purge <package>", which also removes the package's config files.

Note that you may have to use the dpkg command for this process, and you may have to use aptitude rather than apt, although if it's not already installed, you probably can't install anything while that error is pending.

Command list:

# a list of commands or their close equivalents for package management
# use these 2 often, perhaps once a week or more:
sudo apt update
sudo apt dist-upgrade
# the rest are not harmful but only useful in context
sudo apt install --fix-broken
# I have never used the audit option before; I was simply looking for something harmless but to remind one 
# of the dpkg command
# You may not have this package at all.  
sudo dpkg --audit libgl1-mesa-dri
apt list --installed      
# which package did x file come from:
# this is a small part of answering another question of yours
dpkg -S /usr/bin/find
# not directly relevant, but in answer to another issue
tail -n 20 /var/log/syslog       

For problems with running software as opposed to their packages, many pieces of software have a verbose mode or various levels of verbosity. Sometimes you can shut down the systemctl version of the program and run it "by hand" with the direct command and verbose turned on.

You asked if some of these commands are the direct equivalent of what the GUI does. Well, update and dist-upgrade are the equivalent of some of it, and as you discovered, sometimes doing it by hand is an improvement because the GUI will lose track of changing IP addresses. The moral: always do an update before an upgrade.

The above will not upgrade you a full version such as 21.04 to 21.10. There is of course a command line for that, too (do-release-upgrade).

You mentioned digging around on various errors in the syslog. Unless the timing makes it certain it's what you need to focus on, I'm sure there is all sorts of junk in syslog that a perfectly running system spits out. You could spend forever chasing such things.

software (un)installs versus Satan's OS (SOS) / Linux file organization

I only have a fuzzy understanding of how the files are laid out--libraries, binaries, datafiles, etc. In the immediate context, solve your package problem first. I wasn't trying to discourage you from learning more; I was just saying solve that problem first. I don't think a better understanding of the overall system will help you solve the immediate problem.

You pointed out that in SatanSoft you can delete software more easily. There are a number of reasons for that. For one, almost all software is using SOS code. In Linux, you have an ocean of free software to choose from. The Nu HTML Validator uses both Java and Python, for example. There are so many choices of how to build software that it leads to so many more packages. Open source software is built on layers and layers and layers of open source software. The dependency graph is much more complex.

For a number of reasons, SOS apps are already compiled for a known OS, and SOS takes care of any small differences in hardware. The software is already a more-or-less standalone set of files when it ships. Lots of Linux software is interpreted rather than compiled, so its dependencies are not compiled in. Also, Linux runs on such a variety of hardware that an app couldn't compile for a known hardware target anyhow.

Another way of phrasing the above is that SOS software can make assumptions about what libraries are available because those libraries come with SOS. With Linux, one can make very few assumptions, so one has to make a package dependency list which in turn invokes other dependencies.

Again, many more choices and possibilities.

Linux packages do have an uninstall. I just haven't listed the command because I don't have anything I want to uninstall that badly. Also, with Linux, how deep down the dependency tree should the uninstall go? It can't uninstall stuff that other software depends upon.

Upon thought, if you continue to have trouble, I can do LOTS of uninstalling on one of my partitions. This was weeks ago when I tried to downgrade to PHP 7.4. I used a PPA that I had success with before going forward, but going backward got out of hand quickly. I wound up with errors such as "yours" of the day, and I abandoned ship and built another partition. I can go back to that old partition and start stripping it down to get rid of stuff that doesn't work anyhow. I did NOT lose data. It would be difficult, in fact, to lose data due to package issues, even if the system was unbootable. You could still get at the data with another running system whether on another partition or a USB.

several emails ago

You said that you've read some of these entries 3 times. That doesn't surprise me. In case there is doubt, I realize that a lot of this simply cannot make sense yet because you don't have the context. Hopefully in a small number of months, you can re-read these and get more out of it.

January 13, 2022 - broken HTML validator - continuing 02:38

I have the Nu HTML validator working locally. I cloned it to /opt and then this works: python3 ./checker.py all and then python3 ./checker.py run .

Days ago I nattered on about the referer HTTP request header as it applied to the W3 HTML validator. Coincidentally, I just realized that the referer is essentially broken for validation purposes. It appears that in March, 2021 (version 87), Firefox started to "trim" the referer down to the domain name. It would appear that I have a lot of work to do to fix this.

I already fixed it for this page, or at least I gave it the quick-and-dirty fix. I just hard-coded the URL to kwynn.com. That won't work for a fully online test system--that is, a test system accessible from the outside.

The right solution is JavaScript populating the URI / URL on page load. But I have to add that JavaScript to 156 pages according to my grep -R. That is one of the rationales for a single page system--route all page requests through one program that adds / changes whatever needs adding / changing.
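A sketch of that JavaScript solution, emitted from PHP since that's where my pages are built. The element id is an assumption; the ?doc= query is the Nu validator's standard form:

<?php
// hypothetical: point a validator link at the current page URL on load,
// since the referer can no longer be trusted to carry the full URL
echo <<<'HTML'
<script>
document.addEventListener('DOMContentLoaded', () => {
    const a = document.getElementById('validator'); // assumed element id
    if (a) a.href = 'https://validator.w3.org/nu/?doc=' + encodeURIComponent(location.href);
});
</script>
HTML;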

January 12, 2022, one of several entries today, starting this at 23:26

Updated 23:47 with some answers. Updated 23:54 regarding http://validator....

I'm considering several changes to my Apache config, several of which I've tested. Some notes below. I am not using newlines in a realistic way--that is, the following is not precisely correct.

  • I must fix http://validator references before doing the SSL rewrite
  • "$ sudo apachectl configtest" - keep this in mind
  • apachectl above seems to want you to define the ServerName in apache2.conf itself, in addition to elsewhere
  • This works quite nicely: <VirtualHost *:80>Include sites-available/common.conf
  • Does certbot need ServerName in sites-available if it's in Apache.conf? - YES, if you want to auto-discover. But it CAN be in the include / common file.
  • Will certbot find my email address if it's in the include file? Yes.
  • If you are already in *:80, then everything needs rewriting: RewriteEngine On RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [L,NE]
  • Apache Rewrite Rule config flags: L is the last rule, NE is no escape of chars in the URL because you want to pass them on as-is
  • If you're in an .htaccess file: RewriteCond %{HTTPS} !=on and perhaps ReWriteRule ^(.*)$ but I think ^ above would work fine
  • Note that the above is for a 302 (temp) rather than 301 (perm). I can do 301 later.

January 12, 2022, one of several entries today, starting 20:43, first posted 22:20

(I made small revisions at 22:38.)

And back to my web dev Q&A with my apprentice. He is the "you" to whom I'm originally speaking.

First, something I should emphasize again about sending email. Among basic features that should be simple, it is one of the harder things I've done over the years. I'm sure I've done lots of harder things, but nothing that is both such a basic feature and so hard. Part of the problem is that there are US federal laws and presumably many other laws around the world concerning spam, so the providers are very cautious.

To clarify for myself and everyone else, your point about "from scratch" was about memorizing stuff. There are basic things I've done several thousand times that I still look up in some sense of the term "look up" (see below). That's in part because PHP is inconsistent in the order of function arguments in a few places, but that's not by any means the only example.

I'm trying to stop beating on php.net and to stop grep -R'ing my own code. I will suggest something that I have only done sporadically:

Create your own page and / or your own program that demonstrates all the things you keep looking up. Exaggerate the clarity of the variables. I tend to use very short variable names because I get tired of typing them, but I should take my own advice when I'm giving examples.

I think it'd be very funny and geeky and cool to have one bigger and bigger program that demonstrates all of the little "primitives" of coding, where I'm using the term primitives to mean the syntax, the functions, the order of the function args and what they are, snippets of doing a bunch of simple little things, etc.

Alternatively, I have downloaded all the PHP doc, but I never fully installed it. I've also considered installing it on Kwynn.com. I've also also (sic) considered installing the "Nu" HTML validator that W3 runs. Recently I noticed a reference to installing it yourself.

Other than my not understanding the memorizing key to your question, we seem to be on the same page.

What, me using uncommon words such as Quixotic? I would never do such a thing.

WINE is a mixed bag. Around 5 years ago an apprentice got Call of Duty working in Linux running on iCult hardware. I don't remember if it was perfect, but it at least worked fairly well. I've had some success with WINE, but I'd say it's still a pain. A potential apprentice recently mentioned Zorin Linux, which is based on Ubuntu. Apparently the emphasis is on making it easier for people to migrate from SatanSoft. The Zorin WikiPedia article mentions both WINE and PlayOnLinux that Zorin encourages.

HTTP headers

Regarding the "header thing" or the "DO NOT CLOSE PHP TAGS UNLESS..." thing, which I got into in the last few weeks, this issue is a special case, just like sending email is a special case. The email thing you came up with on your own, in that you brought it to me. The tag thing I'm shoving a screeching at you because it has caused me enormous damage. For one, it's a special case because it goes in the "hear me now and believe me [and understand me] later" category. It's probably not worth taking the time to reproduce the dreaded "...cannot be changed after headers have already been sent" error. With that said, I'll give some explanation.

Below is a relevant example. I have removed a number of lines to make it smaller.

$ curl -i https://kwynn.com/robots.txt
HTTP/1.1 200 OK
Date: Thu, 13 Jan 2022 02:11:22 GMT
Server: Apache/2.4.41 (Ubuntu)
Last-Modified: Sun, 04 Oct 2020 04:13:01 GMT
ETag: "195-5b0d094ed72a3"
Content-Length: 405
Content-Type: text/plain

User-agent: *
Disallow: /t/8/01/wx/wx.php
Sitemap: http://kwynn.com/t/1/01/sitemap.xml
Sitemap: https://kwynn.com/t/20/10/sitemap_begin_2020.xml

When your PHP runs in "web mode," it is usually responding to an HTTP GET or POST, which I'll demonstrate more specifically in a moment. It must respond to an HTTP request with a result specified by the protocol (HTTP). It's doing some of that behind the scenes. PHP *MUST* generate headers for the browser to accept the result. The curl command above generates a GET, and that is the result including the headers, minus a few lines. I'm not sure which of the above are absolutely required in an HTTP response, but, whatever the case, the browser is expecting a number of lines and a number of types of lines and then a double newline. Everything after the double newline is the body which is often the HTML itself. If you output anything, even accidentally, from PHP before the proper headers go out, you may break your page.

If that's all there were to the "cannot be changed" issue, it wouldn't be so bad. But, believe me, at least years ago the results were unpredictable to the point that it seemed like a virus. (A real computer virus that does damage to data, as opposed to fake viruses that do nothing to people.) At that time, I was just starting to do hard-core PHP dev, and I could not figure out what was going on. (I think I already was using a debugger.) I'm sure I Googled, but I guess it still took me a while to figure out what was going on.

I was going to show you what I thought was a more "pure" example of an HTTP GET, but it doesn't work quite like I expected. I think that's because I'm not sending a proper HTTP request packet, and my brief attempts to do so didn't get me anywhere. But hopefully the following gives you more insight. Note that I'm removing parts of HTML tags because it seems that the validator doesn't like CDATA, or maybe CDATA has been deprecated.

$ telnet kwynn.com 80
Trying 2600:1f18:23ab:9500:acc1:69c5:2674:8c03...
Connected to kwynn.com.
Escape character is '^]'.
GET /
!DOCTYPE html
html lang="en"
head
[...]
titleKwynn's website/title
[...]
/body
/html

You can see the request and response headers in control-shift-I under the Network tab, and there are other ways to get at it. Note that if you ever parse a raw HTTP response, all of the newlines are old-fashioned \r\n rather than just \n as in Linux. This becomes important if you try to separate header and body. I parse it with "$hba = explode("\r\n\r\n", $rin);" on line 34.
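A self-contained sketch of that separation, with an assumed raw response in $rin:

<?php
$rin = "HTTP/1.1 200 OK\r\nContent-Type: text/plain\r\n\r\nhello"; // assumed raw response
$hba = explode("\r\n\r\n", $rin, 2); // limit of 2: the body may itself contain \r\n\r\n
$headers = $hba[0];
$body    = $hba[1] ?? '';
echo $body . "\n"; // hello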

As for your having no experience with PHP and "header()," hopefully I've shown that this is a much wider question than PHP. Everything on the web uses headers.

As for the header() PHP function, that lets you add headers when needed. I hope you have realized that all headers must come before the main body of the output. :) Reasons I've had to use headers are:

  • "Content-Type: application/json" - You need to tell the recipient it's not HTML.
  • You output a lot of headers if you're doing a direct download such as PHP reading an audio file and then allowing a download, or downloading a PDF. I'd have to go digging for those examples, or I'm sure you'd find them.
  • If I ever decide to rewrite all URLs to a "single page" system like WordPress does, I should output my own modified dates and etags because PHP assumes the output is dated "now," when the date should be when the content changed.
  • header('Location: https://' . $_SERVER['HTTP_HOST'] . $_SERVER['REQUEST_URI']); -- that's one way of doing a redirect, in this case to HTTPS from HTTP, although this is not the best modern method for forcing HTTPS.
  • You have to use headers in cases of cross-scripting issues / CORS / Cross-origin resource sharing. Browser security says the browser can only accept data from one domain unless you output headers telling it that cross-origins are OK.
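Here is the promised sketch of the first case--the data is assumed, but header() is the real PHP function:

<?php
header('Content-Type: application/json'); // must go out before any other output
echo json_encode(['status' => 'OK', 'time' => time()]);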

Hopefully that gives you a lot more under the hood.

kwas()

In your kwas() example, you'll want a line before exit(0); that does echo('OK' . "\n"); I assume it's not outputting anything because you got sendmail installed, and sendmail is accepting emails for processing and thus mail() returns true. (As I said at great length below, that in itself won't get you much closer to sending an email, but anyhow...)

kwas() is about checking your assumptions ALL THE TIME. In your example, you didn't output anything in the event of success.
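This is NOT the real kwas()--consider it a hypothetical sketch of the assert-and-die idea, assuming kwas(condition, message) semantics:

<?php
function kwas_sketch(bool $assumption, string $msg = 'assumption failed'): void {
    if (!$assumption) throw new Exception($msg); // fail loudly, immediately
}

$ok = mail('test@example.com', 'subject', 'body'); // true just means the mailer accepted it
kwas_sketch($ok, 'mail() did not accept the message');
echo 'OK' . "\n"; // say something on success, too
exit(0);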

You did incorporate kwutils into your code just fine. You just didn't do anything to output in the event of success.

There is more to say on this, but I'll wait until you have more examples.

Here is an example of one consequence of require_once'ing kwutils. First without then with:

<?php
$a = [];
$b = $a['a'];
echo('got here' . "\n");
// RESULT:
[stderr in red:] PHP Warning:  Undefined array key "a" in /home/k/sm20/frag/kwuex.php on line 4
got here

<?php // new program
require_once('/opt/kwynn/kwutils.php');
// then same 3 lines as above
// RESULT:
ERROR: kwuex.php LINE: 4 - Undefined array key "a" /home/k/sm20/kwuex.php

When I change the error handler in kwutils.php, warnings become fatal errors. I am certain this is the right answer in the real world. Just about all of my own code uses it. I haven't been able to fully prove it's the right answer in my main paid project because Drupal throws warnings right, left, and center. But the new version is most certainly going to take the position of forcing fatal errors. I made this change in response to annoyance at Drupal throwing warnings and continuing execution.
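A minimal sketch of the idea (not the actual kwutils.php handler): promote every warning and notice to a thrown exception, which is fatal if uncaught:

<?php
set_error_handler(function (int $errno, string $errstr, string $errfile, int $errline) {
    throw new ErrorException($errstr, 0, $errno, $errfile, $errline);
});

$a = [];
$b = $a['a']; // now an uncaught ErrorException rather than a warning that execution sails past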

There is as always more to say, but that will do for now.

January 12, 2022, posted at 20:33 (started 18:11) - installing an HTTPS SSL cert locally

This is my second entry started today and the 3rd entry that either started or bled into today.

You have to have a domain for an SSL cert. For local dev purposes, I highly recommend using a subdomain (testinglocal.example.com) even if you're not using the base domain. One reason is that when in development, you change things so much that you might go over certbot's limits on certs. It's something like 5 certs a week for a given fully qualified domain. Thus, if you're using a subdomain, you can just use another subdomain (testinglocal2.example.com) rather than losing access to the base domain for several days. This isn't theory. I went over the limit several months ago. It snuck up on me.

As I muck around with my internet service provider's modem / router, I'm finding that my local system does not have a 32 bit IPv4 identity. This is important for firewall reasons. So, let's see if this works: "$ ifconfig | grep global" That results in 3 global IPv6 addresses. The first one didn't seem to work, then I added a second. Then as I ran curl from kwynn.com (external to my local system) and found which address it was using, I went back down to only one IPv6 address in the DNS AAAA record for the subdomain. You register DNS records with your domain name registrar. I use Hover.

My router's ping / icmp settings were somewhat confusing. The setting "Drop incoming ICMP Echo requests to Device LAN Address" had to be turned off for the "Global Unicast IPv6 Address" of the router itself to respond to ping. In order to ping my local system, "Reflexive ACL" had to be turned off. That needs to stay off through the certbot verification because the process needs a system passively listening on port 80.

Turn off any active local sites except the one in question. That is, disable them in Apache then restart. Below, "irrelevant" is the site / virtual host defined in /etc/apache2/sites-available/irrelevant.conf

sudo a2dissite irrelevant
sudo systemctl reload apache2

I set the one active virtual host to simply listen on *:80: <VirtualHost *:80> Putting an IP address in the VirtualHost top line did not work--certbot did not find anything listening, even though curl worked. You also need to set the ServerName to the fully qualified domain. Don't forget to restart Apache.
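
Putting those pieces together, a minimal sketch of the virtual host; the DocumentRoot path is hypothetical:

<VirtualHost *:80>
    ServerName testinglocal.example.com
    DocumentRoot /var/www/testinglocal
</VirtualHost>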

Note that the ufw default settings were not causing a ping problem. As for getting ready for the certbot: "$ sudo ufw allow http" and "$ sudo ufw allow https"

Certbot home, and relevant-to-me instructions.

First, do the "dry run": "$ sudo certbot -v certonly --apache --dry-run" Then do the real thing: "sudo certbot --apache"

Once SSL is working, you can put an entry like this in /etc/hosts:

127.0.0.1   blah.example.com

Once you do that, you can reverse all the security and even the AAAA DNS record because the site is now self-contained. Note that you have to open things up again to renew the security cert before 90 days.

If needed, set your firewall back to "Reflexive ACL" on. Then "$ sudo ufw status numbered". Assuming you want to delete all the rules, run "sudo ufw delete 1" repeatedly--the rules renumber after each delete--until they're all gone. Delete the AAAA record.

For cleanup later, when you're done with the cert: $ sudo certbot delete --cert-name blah.example.com

January 12, 2022 (AM) - the concept of "from scratch"

Note that my January 11 entry on email continued into today, January 12. This is a new entry started on Jan 12.

Yet again I'm continuing my web dev Q&A, usually answering emails from one apprentice. He is the "you" when I use "you."

You asked about Googling HTML and CSS and "from scratch." I fear you may have seriously misunderstood what I mean by "from scratch."

Before revisiting "from scratch," I'll separate what I do and do not recommend:

DO

  • Use Google and any other examples / sources you can get your hands on. "Program by Google" is fine as long as you eventually understand the core code you're using. Again, start anywhere / however you want / however you can / whatever makes progress. Do something--anything--to start.
  • Use others' code for specific tasks such as PHPMailer for email. You do NOT have to understand its internals at all.
  • Start with others' code as the entirety of your app, as long as you eventually understand the core logic.
  • If you see something using Bootstrap CSS, find out what Bootstrap is doing and use that specific part without installing a zillion bytes of Bootstrap.

do NOT, at least at first

  • use Angular, React, Bootstrap (CSS or JS), WordPress, Drupal, or the like

When I say "from scratch," in part I mean use out-of-the-box JavaScript as opposed to a general library like React or Angular or even jQuery, at least to start. You should know how to use basic JavaScript. After gaining some understanding, perhaps in a month or two or so, then by all means experiment, probably starting with React. For now, for my own purposes, I have turned away from all of the above including React. I still want to whittle on my own JavaScript. At this rate it will be 6 months at least before I reconsider that. I put React highest on the list because I have not tried it and have heard good things. I have tried Angular and I found it to be a waste of time in the short-term. It took longer to learn the "Angular way" than it would have taken me to do it myself and even create my own portions of a library.

The same applies to using WordPress and Drupal and such in PHP. They fundamentally alter the dev landscape. Using WordPress in some cases may be a necessary evil, but you should still know how to do various things yourself.

You're going to be Googling HTML and CSS and many other things for your entire career (or at least until Google is renamed after being seized for crimes against humanity). Starting with "program by Google" is fine as long as you eventually understand enough to modify it. The long-term problem with "program by Google" is people who do that and barely make it work and then have no idea what to do when it stops working.

I need to fully distinguish PHPMailer from Angular. Angular fundamentally changes how you do things. Angular is a very general library that does a lot of stuff. Again, if you're going down that route, you should know how to do basic things in pure JavaScript first. PHPMailer does something very specific; it does not alter the entire dev landscape. You don't have to understand the internals of everything you use, or you'd never get anything done.

Perhaps another way to come at it is that Angular is an overlay with which you do general dev. It's an overlay of pure JavaScript. You should know the basics of pure JavaScript first. You should be able to implement basic logic in pure JavaScript first. PHPMailer is an overlay of SMTP, but it does a very specific task. There is no reason for you to implement anything that specific. If you implemented everything yourself, you'd never get anything done.

Another way: your current goal is to write a web form and get email notification of the entry. You should understand a basic web form and be able to modify it yourself, even if you copied the code. You should have fluid control over your core product--the web form. A web form is very general and can have many fields and do many things. PHPMailer sends emails. If it "just works," great. It's not the core of what you're doing.

Ideally, "from scratch" means you typed all the core code yourself, or you copied it from yourself. You may be typing it yourself but looking every single detail up. The next nuance that is good enough is that you copied it from someone else but you come to understand it well enough to modify it. Then the next time you are copying from yourself.

Going back to the Bootstrap CSS example, one problem with importing the entire Bootstrap CSS is that it formats every p, div, li, tr, td, etc. You wind up trying to override it and otherwise fight it. I addressed this roughly 2 - 3 weeks ago below when I talked about PHP date formats. The entirety of Bootstrap.css is huge. I whittled down to what I wanted and it was a tiny fraction of the whole thing.

Another way: all of the "bad guys" above are very general libraries or systems or whatever that put a layer between your code and the fundamental code below it--whether that's PHP, JS, CSS, HTML, or whatnot. You don't want to distort your dev environment like that until you at least know what a pure environment looks like.

January 11 - 12, 2022 - sending email "programmatically" (maybe done at 01:39 Jan 12, my time, UTC -5 / New York / Atlanta)

Continuing again with my web dev Q&A...

One lesson is that I realized below that my stack trace in the kwas() example revealed my username by way of revealing its path. I removed that part, but it's a lesson in security. Knowing my username should not matter too much, for a number of reasons, but there is no reason to reveal it, either. It's good to consider such things for cases in which it does matter.

As of the Jan 12, 00:27 version, I have reworked this somewhat since anyone last saw it. For one, I moved the section on email providers up above the details of the PHPMailer class.

My first comment goes back several entries, including an indirect reference in my Jan 9 entry: DO NOT CLOSE PHP TAGS UNLESS THE CONTEXT DEMANDS IT! I will try not to boldface and put that in red, but we'll see what happens. Just after your mail(...) function, you close the php tag. In your code snippet, there is no HTML or hint thereof, so there is no need to close the PHP tag.

Regarding the mail() function, what is the date on that code snippet? Using the mail() function has become more and more problematic over the last several years. I hope no one is posting that recently.

As for the gory details of the mail(...) function: as I read the documentation, I'm somewhat surprised that I have to read quite a bit before getting a hint as to your problem. I know what your problem is, more or less, but I'm looking at it as if I didn't.

To take the problem in parts: this is a case where kwas() would help. Also, I mentioned that you usually want to be able to run code in CLI mode for debugging purposes. Here is what happens when I use kwas() in CLI mode, and something similar would happen in "web" mode with kwas().

First the code, then the result of running the script, below. Here is an active link: https://github.com/kwynncom/kwynn-php-general-utils

<?php
require_once('/opt/kwynn/kwutils.php'); // a clone of https://github.com/kwynncom/kwynn-php-general-utils
// kwas() is currently defined on line 50, but that of course is subject to change
kwas($res = mail('bob@example.com', 'testing', 'test'), 'mail() failed - Kwynn demo 2022/01/11 21:58 EST / GMT -5');
exit(0); // I am simply emphasizing that this is the end of the code -- do NOT close the PHP tag!!!!
        

Running the script--the same thing happens in NetBeans CLI in the Output / Run window:

$ php mail.php
sh: 1: /usr/sbin/sendmail: not found
PHP Fatal error:  Uncaught Exception: mail() failed - Kwynn demo 2022/01/11 21:58 EST / GMT -5 in /opt/kwynn/kwutils.php:51
Stack trace:
#0 [...] mail.php(4): kwas()
#1 {main}
  thrown in /opt/kwynn/kwutils.php on line 51        

I'll come back to this. It occurred to me that nothing says you have to use kwutils in its entirety. There is an argument for using your own equivalent step by step as you understand the consequences. The two points I want to emphasize are kwas() and the two functions where I change the error handling such that notices, warnings, and whatever else become exceptions. Those two functions are my own kw_error_handler() (currently line 69) and the call to set_error_handler(), a built-in PHP function, on line 77.

Back to the error at hand, a related technique to "kwas()" would be to note that the mail() function returned false. You'd have to assign a variable to the return value to see that, though:

<?php
$mailResult =  mail('bob@example.com', 'testing', 'test');
if (!$mailResult) die('mail() fail'); // kwas() does the same thing with fewer lines, vars, and chars :)

Also, in web mode, /var/log/apache2/error.log does show the error: "sh: 1: /usr/sbin/sendmail: not found"

You mentioned PostFix. It may or may not install sendmail. I don't remember PostFix's relationship to sendmail. With email, there is both incoming and outgoing. Even if you got sendmail installed, though, then there is the matter of configuring it. I'm not sure I ever got that working right. I got incoming sort of working, years ago.

Even if you got sendmail working and the email got farther in the process, you have another big problem or several related ones. When you use sendmail, it is going to (try to) connect to the server of the domain name of the recipient as specified in the MX DNS entry of the domain name. Let's say that's gmail.com. GMail may not accept the connection at all. Years ago, sendmail would have worked fine, but then spam came along, and then SSL came along. And then domain name keys came along, and related stuff around email and validating email. Even if GMail actually accepted the email, it would send it to the recipient's spam box unless you did a LOT of work. The work would involve DKIM and whitelisting and God knows what these days.

So the mail() / sendmail path is a very steep uphill battle. I've never tried fighting it very far. I have also been bitten by this exact problem in a real, paid project. It took until roughly 2 years ago to start causing bigger and bigger problems to the point of total failure. Before that, there were spam problems.

Far be it from me to call something like getting sendmail working a Quixotic quest. I have made some motions to that effect. However, in terms of an actual real-world solution, even I have ruled it Quixotic.

I have used 3 solutions in the real world. All of them involve the PHPMailer class that I address further below. First, though, you have to decide on an email sending provider, unless you want to fight the aforementioned uphill battle.

email service provider (sending email)

As I rework this, I realize that all this is just sending email. That is your immediate problem. I'm not even going to address receiving because I'm not happy with my solutions.

I said that I have used 3 solutions, all involving PHPMailer. I do NOT recommend this first one, but I want to address it because it shows you historically how things have gone along different paths, and it gives you basic info before getting somewhat more complicated.

If you wanted to use GMail to send, there is at least one hoop to jump through even with the not-recommended path. If you want to do it the 2010 way, you have to change a setting in your overarching Google account. (I thought you had to specifically turn on SMTP, but perhaps not. I am probably thinking of IMAP and not SMTP.) You have to set your overarching Google account's security to allow "Less secure app access." With this option, you would do that to avoid the infamous OAUTH2. I'll leave it at that short sketch because I don't recommend it anyhow, for several reasons.

I used that above option in the real world until several months ago. One problem is that Google will eventually and intermittently cancel the "allow" option. It's just not a viable option anymore. The next option, which I still don't recommend, is to use GMail with the infamous OAUTH2. I started doing that a few months ago when I stopped using option 1, so I am currently doing it. There are a variety of problems using OAUTH(2), however. I'll mention it as a possible option and then skitter away from it because it's a pain. I have a specific reason for using it right now, but I'm still on the fence as to the cost-benefit. In your case, I would strongly consider option 3:

Here I will propose something that may be mildly surprising or very surprising. I like both free as in speech and beer, but in this case I'm going to recommend a paid option, although it's almost literally dirt cheap for our purposes.

Yes, it's tempting to use Big Evil Goo for free as in beer (where you and your data are the product - TANSTAAFL), but it is a pain. I would probably step you through it if you really wanted to, but it borders on Quixotic even for me.

So I use AWS' Simple Email Service (SES), even for my own, non-paid-project notifications. The cost is so low that I don't think I've actually paid a cent for it even though I use it with a paid project. The project emails ~4MB files that add to the very, very low cost calculation. The price is something like 1 cent per 100 emails or maybe even 1,000 emails plus 1 cent per 100 MB of size, or something like that.

For purposes of being thorough from a tech point of view, MailChimp Mandrill is an equivalent service. I am almost certain MailChimp has gotten into the deplatforming / censorship game, though, so I don't recommend them on that basis. I did some testing with Mandrill years ago when it was free, but I also can't recommend it beyond roughly 6 - 7 years ago because I haven't used it since.

SendGrid is another alternative. I would not say I have used it so much as I have seen it used, but that was over 3 years ago.

Getting back to AWS SES, I need to add another few steps. You create a user in the SES screen. That user includes the username and password that you'll use in PHPMailer. Note that the user you create in the SES SMTP screen is an IAM user, but you do NOT want to interact with that user in the IAM screen, as I further address below.

Also note that the PHPMailer username is not the IAM user with dots in it (by default). The PHPMailer SMTP username is of the form AKIA3ZN... continuing with several more uppercase letters and numbers. As the instructions tell you, you only get to see the password or download it once upon creation. Otherwise you have to create a new user, which is no big deal, but I mention it to save you frustration. Note that I have found that renewing the credentials of an SES user in the IAM screen does not work. If you want to change the password, just create a new user in the SES screen and change both the username and password. If you change just the IAM password, you get silent failure. That is, you get silence at first glance. I never even set the debugger on it to see when or if the "silence" ends. I just went back to the SES screen rather than the IAM screen.

Another small potential problem with AWS SES is that you STILL have an issue emailing to arbitrary users--yet another layer of spam protection. By default, when you start using AWS SES you are in "sandbox" mode. In sandbox mode, you send a potential recipient an email from an SES screen, and he clicks an activate link. THEN you can email that address.

The SES screens list the port number and SMTP server and SSL / TLS / whatever settings, too, and they are in my code I mention below. Once you have a username and password and approved recipient, you're getting yet closer to actually, like, SENDING AN EMAIL. Amazing, huh?

PHPMailer class and composer

All of my solutions involve the PHPMailer class. I install it with the "composer" command. "composer" itself is mildly irritating to install, as I remember. You can start with "$ sudo apt install composer" but I'm not sure it's going to work. This is one of the roughly 20% - 30% of cases where "apt" / Aptitude is either not the entire solution or the recent-enough package just doesn't exist for Ubuntu / Debian. See what happens. This is a case where I can probably help quite a bit. Yes, the solutions are of course out there, but I still remember that it was irritating.

Composer is a tool specific to PHP. It's a package management system for PHP (source code) libraries. When you install a composer library, by somewhat circuitous steps it's a series of includes / requires / require_once() that pulls the PHP source code into your own code. That means that you can debug a composer-installed library. I don't think I've had to fix a bug in a composer library, but I have debugged into several of them in order to understand a problem and / or learn about how the library works.

As an aside, I specified that composer installs libraries that are included and can be debugged. That's as opposed to a library / extension that adds native PHP functions. For example, my nano extension is a PHP extension written in C that creates a few native PHP functions. Once it's installed, you simply call "nanotime()" like any other PHP function with no include / require / require_once needed. You cannot debug nanotime() just like you can't directly debug mail() by stepping into it.

Getting back to your original problem, first you have to get composer installed. Then you need to decide where to put composer libraries. I use /opt/composer; I had to create the "composer" directory. Then note that composer wants you using a standard user, NOT root or sudo. Therefore, going back to your lesson on permissions, I recommend changing "composer" to be owned by your own user and giving it 755 permissions (rwxr-xr-x). The world / "other" users need to be able to read and pass through. There is no security issue with reading because the composer libraries are "public" code in the same sense that the "ls" command is public.

Once you have your permissions right, do the following. In my case, it's already installed, so your results will be different:

/opt/composer$ composer require PHPMailer/PHPMailer
Using version ^6.5 for phpmailer/phpmailer
./composer.json has been updated
Running composer update phpmailer/phpmailer
Loading composer repositories with package information
Updating dependencies
Nothing to modify in lock file
Installing dependencies from lock file (including require-dev)
Nothing to install, update or remove
Generating autoload files
5 packages you are using are looking for funding.
Use the `composer fund` command to find out more!  

Unfortunately, you're not quite done with your intro-to-composer experience. After I just finished saying above that I wanted to emphasize 2 parts of kwutils, I need to add a 3rd. ("Our three main weapons are fear, surprise, ruthless efficiency...") Once again, you don't have to use my kwutils, but you need to know how to use composer. If you go grubbing (grep'ing) around in kwutils, you'll see I do it like this... Actually, this brings up an interesting question for you. If you use the whole kwutils, I think PHPMailer will "just work" once you have it installed under /opt/composer. Let's see...

<?php
require_once('/opt/kwynn/kwutils.php');
kwas(class_exists('PHPMailer\PHPMailer\PHPMailer'), 'class does not exist');
echo('OK' . "\n");

Yes, it just works. If you think that the 'PHPMailer\PHPMailer\PHPMailer' syntax is one of the weirdest things you've ever seen, I agree. It gets into PHP "namespaces." I understand the concept, but I have barely studied them and have barely attempted to ever actually use them for my own code. One of the lessons I like to convey to apprentices is that I am very far from all-knowing, even when I should be a PHP "expert."

There may be "gotchas" just with require_once'ing kwutils. Maybe you'll find out. Either way, you should still understand what's going on behind the scenes:

<?php
set_include_path(get_include_path() . PATH_SEPARATOR . '/opt/composer');
require_once('vendor/autoload.php');
if (!class_exists('PHPMailer\PHPMailer\PHPMailer')) die('class does not exist');
echo('OK' . "\n");   

That works. As for actually USING PHPMailer, that is yet another step. Isn't this fun!?! Actually, in terms of something that should be simple like sending email, this is one of the harder tasks I've had over the years. Be happy that you're learning from my experience. :)
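
To make it concrete, here is a minimal sketch of PHPMailer over SES SMTP. The host is the SES endpoint for a hypothetical region, and the credentials and addresses are placeholders; the SES screens give you the real values:

<?php
set_include_path(get_include_path() . PATH_SEPARATOR . '/opt/composer');
require_once('vendor/autoload.php');

use PHPMailer\PHPMailer\PHPMailer;

$mail = new PHPMailer(true); // true: throw exceptions on failure
$mail->isSMTP();
$mail->Host       = 'email-smtp.us-east-1.amazonaws.com'; // your SES region's endpoint
$mail->SMTPAuth   = true;
$mail->Username   = 'AKIA...'; // the SES SMTP username, NOT your IAM login
$mail->Password   = 'yourSmtpPassword';
$mail->SMTPSecure = PHPMailer::ENCRYPTION_STARTTLS;
$mail->Port       = 587;
$mail->setFrom('verified-sender@example.com');
$mail->addAddress('approved-recipient@example.com'); // sandbox mode: must be approved
$mail->Subject = 'testing';
$mail->Body    = 'test';
$mail->send();
echo('OK' . "\n");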

So, with that said, here is another decision point. I have created my own email class to use PHPMailer. There are most certainly "gotchas" on that--that is, if you use my class precisely, you have to set up the credentials like I did, and there are probably other gotchas. Hopefully I give instructions. (It's been long enough that I don't remember.) And if you want to do it "your way," that's fine, too. Also, I just created a web form with email notification a few days ago. Yours does not have to be that complicated. You can just use an HTML "form" for now. I get all fussy about save-on-edit (AJAX) because it was a specification of my main client. It was a lot of work to implement such that I'm still perfecting it.

Actually, to digress again, the save-on-edit went in 2 phases (so far). For the most part, I got it working several years ago and that is still working. Months after one of my revisions, we learned the hard way that my solution lost way too much data in some cases. I never did figure out what the "cases" were; I just reconceived and rewrote part of it. This problem wasn't catastrophic but it was of course annoying. I rewrote the one field that was causing problems. Since then, it has worked to the point that my client hasn't reported any more problems. I have reason to believe that small bits of data are still being distorted, but it's obviously not critical. Obvious because nothing bad has happened in a long while.

Because I got tripped up over that, I've kept whittling on my save-on-edit technique. I will probably rework it yet again with my main client in the next few weeks, as I partially rewrite the whole application to escape from Drupal and be compliant with PHP 8.0.

Back to your email problem. As for PHPMailer, you have my examples, and there are plenty more examples out there. I'm going to try to wind this down.

ALL THAT is to say that email is no longer easy because of nearly 30 years of spam wars.

January 9

Cue Rage After Storm's "*autistic screeching*" that I address at some length in my new personal blog. Several days or perhaps a few weeks ago I addressed php tags and the infamous output-before-headers issue. Now I can quote it precisely because I encountered it again: "...cannot be changed after headers have already been sent." I'm not sure that was the exact wording I saw many years ago, but it's close, and it's the same problem. In this case, the exact quote was "ERROR: kwutils.php LINE: 201 - session_set_cookie_params(): Session cookie parameters cannot be changed after headers have already been sent /opt/kwynn/kwutils.php" For the record (again), /opt/kwynn is my clone of my general PHP utils file and repo. Note that the link is to a specific version of the file--the relevant one.

I felt like "*autistic screeching*" when I saw that. The good news is that now I know what to do.

I'm going to get lazy and stop linking stuff. You'll see changes in my GitHub in at least 2 repos in the near future. I'm writing this 1/9 at 00:26 my time. The short version is that you call a parent PHP file as the link target and then require_once() the template. The session stuff goes in the parent file before the template is called.
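
A minimal sketch of that arrangement; the file names are hypothetical:

<?php
// parent.php -- the link target; all session work happens before any output
session_set_cookie_params(['secure' => true, 'httponly' => true]);
session_start();
require_once('template.php'); // template.php does the HTML / echoing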

2022, January 5 (PM) - at least 2 entries

"side exit" from the shopping cart (16:59)

Continuing again the web dev Q&A...

The principle of "do something" includes taking "side exits." It's fine to divert from the shopping cart to do something simpler with a database. Any understanding you gain is "doing something."

MySQL became MariaDB...

...and "Istanbul was Constantinople."

I should have thought to mention this earlier. If you take the relational route, MySQL became MariaDB. For Ubuntu installation, I *think* all you need is sudo apt install mariadb-server
In case it helps, I list what I have below; the one command above should kick off the rest. You'll need to download MySQL Workbench directly from Oracle, though.

   apt list --installed | grep -i maria
[...]
libdbd-mariadb-perl/impish,now 1.21-1ubuntu2 amd64 [installed,automatic]
libmariadb3/impish-updates,impish-security,now 1:10.5.13-0ubuntu0.21.10.1 amd64 [installed,automatic]
mariadb-client-10.5/impish-updates,impish-security,now 1:10.5.13-0ubuntu0.21.10.1 amd64 [installed,automatic]
mariadb-client-core-10.5/impish-updates,impish-security,now 1:10.5.13-0ubuntu0.21.10.1 amd64 [installed,automatic]
mariadb-common/impish-updates,impish-updates,impish-security,impish-security,now 1:10.5.13-0ubuntu0.21.10.1 all [installed,automatic]
mariadb-server-10.5/impish-updates,impish-security,now 1:10.5.13-0ubuntu0.21.10.1 amd64 [installed,automatic]
mariadb-server-core-10.5/impish-updates,impish-security,now 1:10.5.13-0ubuntu0.21.10.1 amd64 [installed,automatic]
mariadb-server/impish-updates,impish-updates,impish-security,impish-security,now 1:10.5.13-0ubuntu0.21.10.1 all [installed]  

To add to the confusion, MySQL is still being developed, as far as I know, but when Oracle bought the MySQL company, the open source community forked MySQL into MariaDB. When people speak of MySQL these days, they probably mean MariaDB in most cases, or perhaps 85% of cases.
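
If you go the MariaDB route, a "hello world" from PHP might look like the following; the database name and credentials are hypothetical:

<?php
// Minimal PDO connection to MariaDB / MySQL; set it to throw exceptions on error.
$pdo = new PDO('mysql:host=localhost;dbname=test', 'devuser', 'devpass',
    [PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION]);
$row = $pdo->query('SELECT NOW() AS now')->fetch(PDO::FETCH_ASSOC);
echo($row['now'] . "\n");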

2022, January 4 - 5 (AM)

entry 2 on the 4th then into the 5th - sessions, etc. (into Jan 5 01:10)

Regarding sessions, my update to my pizza code gives an example. I'm only using a handful of functions from /opt/kwynn, so you can either extract them or use my whole utility. A reminder that I addressed this at some length days ago. Some of the usage in my little functions is very hard-won information.

The session ID returned by my function keeps track of one user. Behind the scenes, a cookie is going from server to client and back. Keeping track of the session is really that easy. You can just call my "start" function every time because if the session is already started, my function will return it. Perhaps my function needs a better name, in fact, or an alias.

The cookie goes back and forth in the HTTP headers. You can see both the headers in the network traffic and the stored session on the client in Control-Shift-I as in India.

The session ID is very helpful, but it's only a portion of the shopping cart code. You addressed some of the rest of it in your other questions. I'll come back to them.

You asked about echos within a PHP / HTML file. In an entry roughly 10 days ago, I suggested up to 4 layers of PHP code from back to front. The echos go in the frontmost layer. An example is my user agent template file. The variables are built deeper and deeper and come out with the echo.

More generally, a .php file can be all HTML, all PHP, or both. If there is no php tag, then the HTML goes straight to output--straight to the client / browser--from top to bottom like any other content. When there is a php tag, that code is run, and any HTML below isn't output until the php tag ends.

You can even do conditional logic on the HTML. You can surround the HTML with { } of an if and conditionally output the HTML. I have an example of that somewhere. Remind me to find it if you don't find one.
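
Until one of us finds my example, here is a minimal sketch; the $cart variable is hypothetical:

<?php if (!empty($cart['items'])) { ?>
    <p>Your cart has <?php echo(count($cart['items'])); ?> items.</p>
<?php } else { ?>
    <p>Your cart is empty.</p>
<?php } ?>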

Whether you can do the same thing in JavaScript is both simple and more complicated. The short answer is yes, but if the data is coming from the server, then it still has to get to JavaScript somehow. But yes, you can do the same things in PHP (server side) or JavaScript (client side) with the caveat that the data has to get to the JavaScript. I discussed this at some length "below," such as when I discuss using one big JSON rendered by JavaScript versus writing the HTML in PHP.

How the client and server interact is a big question in that there are at least several good answers.

You mentioned clicks in JavaScript. Yes, detecting clicks and what was clicked and what the click means more or less has to be done in JavaScript, or at least it makes more sense. You mentioned writing to a local JSON. Note that client side JavaScript can't write to an arbitrary local file. JavaScript is very limited for security reasons. There is "local storage" in JavaScript, but I'm not sure there is a point in using it in this case because everything has to go to the server anyhow.

As I mentioned several days ago, I tend to think you want to account for the user moving off the site and then coming back to it, so the cart should primarily live on the server keyed by the session ID. With some exceptions and alternatives, JavaScript data is lost when the user clicks away from the page.

Getting back to the cart more generally, it's probably time to start learning about databases. If you want to punt that, you can save things to your own files or whatnot. You are probably correct that the basic cart concept is harder than you thought. You'll have to learn about client-server interaction and databases, and that's before you do the payment / checkout.

I should probably bang together the simplest shopping cart I can manage--perhaps 2 items with arbitrary quantities. I assume you're done for the day, though. I'm not sure I can be so inspired if you're going to bed. Also, you might need to back up and do some more general playing around with a database. MongoDB would make my life easier. I could live with MySQL, but it would cause grumbling on my part. Relational databases are so 1990s. If you install MongoDB, I recommend Robo3T as a GUI.

Yeah, as I think about it, learning basic database stuff and "hello world" both for the command line and programming databases is probably going to be a detour for the shopping cart. We'll probably do a lot of back and forth on this. For now, I'm not sure how helpful it would be for me to create a shopping cart.

Need a database for the shopping cart?

Do you need a database for the shopping cart? The short answer is very likely yes. The longer answer is something I hinted at above and you did in your email. You mentioned the shopping cart as a JSON file. Yes, the cart can be a JSON file. It's somewhere between impractical and not particularly sensible to save that JSON file on the client side, but you could save it on the server side.

You could do something like that during development, but it's probably time to bite the bullet and learn databases. For one of many examples, if you had a bunch of shopping cart files, it's harder to answer the simple question of "How many orders need to be fulfilled right now?" As its name implies, the purpose of a database is to organize data.

which database?

MySQL is not installed on Kwynn.com. MongoDB most certainly is. I only have MySQL installed on my latest local system because I could not quite justify moving my main (but part-time) client off of it until a few weeks ago. Now I am moving him off of it, but that's in the present tense. I will be happy when I can delete MySQL from my system.

Part of my pitch for Mongo is that you've mentioned a JSON file, and that is one lovely thing about Mongo--you essentially toss JSON files right in the database. To get it into MySQL in a logical format is a lot more work.

With that said, you pointed out that all the PHP examples you've seen so far are MySQL. MongoDB works in PHP just fine, but I very likely am in a (small? tiny?) minority of PHP users. I assume the common PHP stack is still LAMP (Linux Apache MySQL PHP). Mongo shows up in the MEAN and MERN stacks (MongoDB, Express, Angular / React, Node.js), to name a couple.

On one hand, I leave it as an exercise to the apprentice to research trends in relational versus OO DBs. On the other hand, I can be reasonably sure that MySQL in particular isn't going anywhere anytime soon. There may or may not be a slight trend towards OO, but it must be slight at best.

This is yet another instance of "do something." There will be grumbling from me over MySQL, but just your point about examples is an argument for starting in that direction. (All of my GitHub code is MongoDB, but my code was not written as a tutorial.) Also, I might drop support for MySQL when I drop / delete the whole thing, but that may be months away. Right now I won't even bother with an estimate beyond "months."

On the other hand, just in the last few days I've started writing some of my Mongo code to be executed via the MongoDB command line, invoked from PHP. In other words, if you could find good lessons on Mongo, the first concern would be learning it generally from the prompt, or better yet from Robo3T. If you can learn it generally, running it from PHP is almost identical to running it from Robo3T, now that I have libraries that do that more and more easily.

I'll stop there and see what you come up with.

entry 1 - responsiveness

Continuing the Q&A with my apprentice, today regarding "responsiveness" and such:

I mentioned the JS and CSS refresh issue "below." To recap, several options: you may have to put the code you're actively working on in the HTML file. Then a refresh should work. Remember that your CSS and JS files can be blank or commented out, waiting in the wings for the cut and paste back to them.

If you give the site a unique URL query http://localhost/?blah=1, then ?blah=2, etc., it might help. Also, Firefox will emulate a mobile view to a large degree. When you hit Control - Shift - I (as in India), there is a screen resizing icon on the right middle / at the top of the dev tools screen.

I'm sure there are other ways to solve the JS / CSS refresh issue.

Also, you may be misunderstanding the point of "responsiveness." Ideally the exact same CSS works for both. Ideally it's not a matter of detecting the type of device but writing fully dual-use code. For example, tiles in a grid will settle to 2 rows of 10 columns on a big screen, but will be one column on a small screen. A small number of the tools I use all the time are specifically meant for mobile. For that, I live with cartoonishly large text on a desktop. Keep in mind sizing by vw and vh--viewport width and height, where 100 is full screen in that direction, but you do NOT use the percent sign. You can of course use the percent sign in other contexts. I may use a font size of something like 3vw so that the screen could hold roughly 33 characters width-wise.

With that said, there are probably times when you break down and use the CSS "@media" rule and / or detect dimensions in pixels or such.

I'm sure lots of text has been written on this topic. I am probably not the one to ask for much more, although I may find such issues interesting enough to dig at with you.

2022, January 2

First, an update to my previous entry: For a week or so there was some DARPA LifeLog (Facebook) activity around my very accurate clock web app. Then on the 30th it seems my clock was mentioned in a YouTube video and / or its live chat. I haven't tried very hard to track this down, but so far I have no idea of any more details. Apparently I hadn't updated my web logs locally when I wrote that blog entry on the 30th. I thought I had. Anyhow, it looks like somewhat over 100 people were playing with my clock roughly mid-day on the 30th my time. I have some indication that some came back the next day to count down New Year's in Australia--based on time and a few IP addresses I looked up.

With that, back to the dialog with my apprentice:

Even if you did use NetBeans, it would probably work with Java 11. Given that you're not using NetBeans, no, there is no need to mess with Java in any way.

As for my utilities that I clone as /opt/kwynn: just as with anything else, I don't see a big reason to "activate" / require_once / include them until you need them. You may need them soon, but we'll cross that when we come to it. I will offer two caveats, though, which should also re-emphasize two things I said a few entries ago:

When you're in CLI mode, sometimes you'll see PHP warnings from stderr that NetBeans colors in red. You may see them in web mode / HTML, too. As I mentioned, I changed my handler to kw_error_handler(), and it treats notices and warnings just like errors. After doing this for over a year on my own projects, I am sure it's the right answer. I couldn't do it in Drupal because it was throwing way too many warnings. Now I am doing it in the new version of my steady (but always part-time) paid project, so it will get battle tested more and more.

Perhaps this is too subtle a point for the moment. Also, I suspect PHP 8 does this to a degree--treats more issues as errors rather than a notice or warning. When and if you encounter not-quite-errors, though, keep this in mind.

Also, I have never regretted using kwas() all over the place, and that has been battle tested with my paid project. kwas() lets you easily check your assumptions. The first argument is either truthy or else an exception is thrown with the (optional) message in the 2nd argument and an optional error code that I had forgotten about until a moment ago when I looked at the function definition again. Once again, this might be somewhat subtle, but you'll probably figure out how to use it soon.

2021, December 30 - web server log analysis - 3rd entry today

Below are 3 (or more) entries in my apprentice web dev "series."

In the last several weeks I have done more work on my web server log analysis. I'm back to the question of how many human beings read my site, as opposed to robots? Of those human beings, what do they read?

At least 79% of hits to this site identify themselves as robots. See my "user agent" page. I don't have a definite number, but I would guess that of the remaining 21%, half of those are also bots that pretend to be a browser, although the number may be higher.

Of THAT remainder, my own usage of my site is probably yet another half. So my estimate is that 2 - 3% of hits are other humans.

I can identify likely or definite robots in a number of ways. Devs--including me, with my own robots (which have very legitimate purposes)--rarely update the fake user agent. If an alleged browser version is over a year old, that's very likely a bot. If it's 4 years old, which I see all the time, that's almost certainly a bot.

At least one bot makes precisely 2 kinds of queries: to my home page and to non-existent pages. It's almost certainly attempting to hack my system by calling non-existent WordPress pages and such.

AWS scans my site to make sure it's up. I can tell by the IP address ownership and because it only reads my home page.

A bot will fetch many HTML pages in a second. Humans don't do that.

I'm sure I'm missing a few.

Of the likely humans, I seem to have some "engagement" in that they move from page to page, but not a whole lot of engagement. In 11+ years of this incarnation of my site, something like 5 people have contacted me based only on the site.

This might all bring up the question of what's the purpose of having a site. The first answer is that I use it all the time. I have a number of tools I wrote that I use all the time. That's another topic, though.

Myself aside, I would probably keep it up, but that's also another discussion, perhaps for later.

Regarding human readers, recently I'm trying to figure the chance that my site will solve my housing problem. My jury is still out. I'll have to do a number of (many) hours more work to keep clarifying the log data, and meanwhile I should be room hunting by more direct means.

There's always more to say on this topic. Perhaps later.

2021, December 23 - 30 - probably beyond - web dev Q&A

This is an ongoing Q&A with one of my apprentices.

December 30 (Thu)

entry 3 - starting 21:59

Consider using the "section" tag when appropriate rather than div. I have starting doing it here, although I'm not totally consistent. I am almost certain "section" is new in HTML5. Note that every section needs an "h" or "hn" or "h1..7" header, so that's one way you know when it's appropriate. I am assuming section and div are otherwise identical in terms of their defaults, but I am not at all sure.

entry 2 - starting 21:39

Upon thought, I decided to publicly answer another part of your email:

To be honest, I have already forgotten the git features in NetBeans. I've pushed code several times since I mentioned it, and didn't even consider NetBeans. Perhaps I'll manage to use it before 2021 is over, or perhaps not.

Regarding styling, I guess I've become slightly more interested. I'd have to think about that quite a bit. Yes, there has been movement towards being slightly to somewhat more decorative, but I'd have to think about all the reasons why.

When you say "phone viewing," I suspect you meant to use another word. Do you mean talking on the phone twice, and offering to go live? That's a long discussion. I really should publicly explain my issue with the phone in some detail. Not now, though.

entry 1 - posted and announced just before and at 21:37 EST

Regarding your blog, it does validate, so that's a great start. I had almost no idea how you did the 1 second color transition. I vaguely knew such things were relatively easy, but I didn't know details. When I went to look at this color "thing," I ran into another pet peeve.

Immediately upon load / refresh, your page is showing console errors--both JavaScript and HTTP. Also, when you click on the main text of either entry, there is another error.

For unknown reasons, Firefox if not all (relevant) browsers get excited about a lack of favicon. Just toss mine or anything else in there for now. Just make the errors go away!

You may already know more than I do, but did you research the "-moz", "-webkit" and "-ms" properties? Those are vendor prefixes. My understanding is that they're for very old, non-compliant browsers and / or for very new, experimental features that haven't been standardized yet. I'm not sure what happened with all that. If it works on both your desktop and phone, I'd call it a win and not clutter your CSS with such things.

I am rewriting this part because before making such a fuss I should justify it. I know that by HTML 4.01, if not long before, styling had been removed from HTML itself as in HTML tags. That is, the "font" tag was gone, and related tags. Therefore, I suspect that the "emsp" / tab HTML entity is frowned upon by HTML5 purists for the same reason. By using the tab, you are bringing styling into the HTML itself rather than doing it as styling. I think you can use padding-left, and there are likely other alternatives. I'm almost sure I've seen your spacing issue addressed by CSS.

Rather than projecting upon others, I will declare myself an HTML5 purist and frown upon it myself. In fact, for the first time ever I will use the HTML frown code. It seems appropriate to use it to frown at another code: I will even style it! There. You have been frowned upon.

Yes, that is an appropriate use of the style attribute rather than "class."

Back to whitespace, don't forget the "pre" HTML tag. There are cases where you don't want the default rules of HTML messing with your whitespace, and "pre" is one of the solutions to that. I use it just below and elsewhere in this page. (It's also useful when outputting numerology values and making them line up with the letters.)

Also, I would add the year and timezones to your blog. They don't have to be in the header, just somewhere. Hopefully those entries will be online for many years.

On that note, you may have noticed that my website suffers from a variant of the Y2K issue. I started this incarnation of my site in 2010. I remember thinking about it at the time and deciding that my life would almost certainly be very different by 2020. Certainly I would not be pecking away at the same file hierarchy. The joke is on me. With that failed assumption, in URLs I numbered the years 0 for 2010, 1 for 2011, ..., 9 for 2019. Then I had to use 20 and 21 and very soon 22. This of course throws off the ordering of digits:

/t$ ls
0  1  2  20  21  3  4  5  6  7  8  9

THAT is annoying. Just a cautionary tale.

caching revisited - especially CSS and JS

Firefox and possibly others can be very annoying when it comes to caching / refreshing external CSS and JS. Oftentimes I have given up and put the JS and CSS back in the HTML page just long enough to get it to refresh. I mentioned several days ago that it may be worth quite a bit of coding to check JS and CSS refresh. I think you misunderstood what I meant because you had a file with the date in it. I meant using server-side code to get the filesystem date of all the files involved. Then you'd have to modify the JS and CSS on the server side and then run JS to make sure the changes are made. All of that is probably going too far. Some much easier things that might work are:

Click the JS / CSS link in the "view source" or debugger and hard refresh it. Make sure that is refreshed. Then hard refresh the HTML page. I think that almost always works. Sometimes adding a (literally?) random URL query makes the browser think it's a different page, and that works, such as blah.html?refresh=1. But then you sometimes have to keep incrementing the numbers. When you turn the cache off in Drupal, the query becomes a UNIX Epoch timestamp for that reason.

The situation on mobile can be so bad that you have to put JS and CSS that is under heavy dev in the HTML. Remember that you can make your page a .php rather than .html and simply "include" the styling. For that matter, you can write PHP code to switch back and forth between internal and external.
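
Along those lines, server-side code can put each file's modification time in the URL query, so the browser sees a "new" URL whenever the file actually changes. A sketch, with hypothetical file names, inside your .php page:

<link rel="stylesheet" href="style.css?v=<?php echo(filemtime('style.css')); ?>">
<script src="main.js?v=<?php echo(filemtime('main.js')); ?>"></script>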

Remember that you can see caching or the lack thereof in the network traffic. I just refreshed your blog, and I had to do a hard refresh to get the CSS to go over the network again. I don't think it showed me explicitly that it was caching; it just didn't show the CSS going over the network. I think you can see the caching itself somewhere.

caching generally

I have suffered many issues over the years with caching in various contexts. I have harsh words for developers who don't make it very easy to definitively turn the cache off. Furthermore, caching is way overused. As 2022 approaches, if you can't make your process work in milliseconds without caching, there is probably something else wrong.

Firefox has a legitimate reason for caching because zillions of people use the browser and thus caching has saved an enormous amount of CPU time and network traffic over decades. However, Firefox should still have a way to definitively turn the cache off. Maybe it does, in fact. I think I've looked, though, with no luck.

back to your blog

When you talk about text wrap, I think you misunderstand what the flex box does. The flex box wraps entire divs. Whether text wraps is a separate issue.

As for centering an image, I have very little advice. Sometimes the "auto" value comes in handy for such things.

December 28 - 2 entries

8:55pm - starting to write

Note the previous entry, about an hour ago.

Do you need the cookies? As I said yesterday, you do at least for the scenario of someone accidentally closing their window and coming back to the site. You may or may not need them other than that; it depends on how you arrange your page.

How to implement them? It's as easy as I laid out yesterday. If you use my wrapper around the session functions, it's that easy. If you don't use my wrapper, see my caution about not restarting an existing session.

If you're considering doing it "from scratch," I would advise against in this case. Out-of-the-box PHP does the job splendidly. This is a case of "just use it." If you want to do it yourself, I'd save that for months from now. The short version is that the cookie goes out in the HTTP response header and comes back in the request header. There's no reason to mess with any of that now. You will want to see the cookie itself in control-shift-I storage.

As for an SSL cert, I very rarely bother with them on my dev machine. My implementation of sessions allows the session to ignore SSL on my dev machine and / or non-AWS machines. My functions assume live is AWS. I may have to deal with that at some point.

If you want to do it, I recommend certbot by Let's Encrypt. I installed it as a snap rather than an Ubuntu package. You have to register a cert against a domain name or subdomain, so you'll need to route such to your dev machine.

8:00pm (approximately)

We talked on the phone for a while. I was giving a lesson and solving webadmin problems as I walked.

Today's phone lesson was in part about Apache DocumentRoot and setting directory permissions for the www-data user / group.

Some reminders from that lesson... The path all the way from / to document root should have my recommended 710 permission and have www-data group access. Document root itself probably needs or should have 750 permission.

Consider changing everything else in ~ to 700 (dirs) or 600 (files). There is a chmod capital-X flag that does this quickly. chmod can be used with the bitmask or "letter" flags and pluses and minuses.

If you ever figure out how to change the default such that files and dirs don't get such wide permissions, let me know.

Going back to the email exchange, he said he's going to try the JetBrains WebStorm IDE / debugger. Apparently it's proprietary, but he has a free-as-in-beer license from college. (I of course use free as in beer and speech Apache NetBeans.) This is a case where following Kwynn's Rule #1 is more important than open source versus proprietary. As long as he's using a debugger, I will try not to further comment.

> i'm going to hold off on using the integrated git vcs because i want to continue to learn the command line way of doing things and get familiar with that.

Agreed. With that said, I've been vaguely noticing that NetBeans has this stuff. Now that you mentioned it, I looked harder. It hadn't gone through my head that NetBeans has all the commands. For the usual tasks add, commit, push, I think I've got that down at the command line well enough that I may try out NetBeans' commands.

December 27

Starting from one of yesterday's emails, a flex box grid sounds good. I have found it useful. Perhaps some other time I'll do a recursive search on my web tree and find all the instances where I use it. I've considered giving you a copy of the site, in fact.

To various questions from both yesterday and today about the shopping cart and client versus server side... You'll probably want to use PHP sessions, which is a cookie with a unique ID. The PHP functions do it all for you, though, in terms of creating the id and managing the cookie. In kwutils.php, see startSSLSession(). This is at least one big "gotcha" that I solved with that function: the session functions get ornery if you start a session when there is already a session. In fact, doing so might lead to the horrible beast that goes to the effect of "output before [HTTP] headers." Which reminds me:

the output-before-HTTP-headers issue and intentionally NOT closing PHP tags in most situations

The following is a counter-rule to every other situation in programming. In all other cases I know of, if you open a tag you should close it. Do NOT close a PHP tag ?> unless the context demands it!

That is, unless the PHP is switching back and forth with raw HTML, you don't need to and SHOULD NOT close the PHP block / file. Look at my code. I am almost certain I am 100% consistent about this. I would be surprised if you found a counterexample in my GitHub.

The problem is when you have an included file in a mixed-PHP and raw-HTML file. In the included file, if you close the PHP tag and then hit a newline or even a space, that is considered raw text and will be outputted because it's not PHP. If it's an include file or otherwise outputted before the HTML itself begins, you'll get the "output before [HTTP] header" error.
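
To make that concrete, a two-file sketch of how the bug happens; the file names are hypothetical:

<?php
// inc.php -- note the closed tag below and the blank line after it
function something() { return true; }
?>

<?php
// page.php
require_once('inc.php'); // the blank line after inc.php's ?> is output right here
session_start(); // warning: the session cookie cannot be set--headers already sent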

That error indirectly led me to quitting two projects around the same time, many years ago. I spent a lot of time chasing that issue around. I think it took me months of calendar time to figure out what was causing it. And the way it happens is insidious. It's like a virus that seems to pop up at random. Those projects may have gone on for quite some time or even indefinitely, so that little bitty issue may have cost me an enormous amount of money. That's not even the situation where violating (what is now) rule #1 cost me even more, potentially. I'll come back to that.

back to sessions

So, when using sessions, make sure to return the session_id() if it's truthy (sic) rather than trying to restart the session, as my function shows. Then that function calls another in the same file that forces SSL. In your case, you'll (also) want to do the standard Apache rewrite that forces SSL anyhow. You'll want to do that because you're starting from scratch. I am afraid to do it for Kwynn.com at this point. It's on my agenda to thoroughly test it. Perhaps I'm being paranoid, though.

Once the session starts with session_start(), every time a PHP file is called from the client, the session_id() will give you a long-enough unique string. That will help with the shopping cart and otherwise keeping track of what a specific user is doing.

The PHP session functions create a cookie named PHPSESSID. It's 26 characters. I'd have to experiment to be sure, but it looks like a regex of ^[a-z0-9]{26}$ So 36 to the 26th power is roughly 10^40. I think that'll suffice.
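
A minimal sketch of the "don't restart an existing session" wrapper--the real startSSLSession() in kwutils does more, such as forcing SSL:

<?php
function startSessionOnce(): string {
    if (session_id()) return session_id(); // already started: do NOT start it again
    session_start();
    return session_id();
}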

server versus client calculations

As for whether to use the client or server for calculations, you MUST at least check the calculations on the server for reasons you mentioned in one of your emails and I mentioned in the last few days in this blog. That is, if you rely on client data, a malicious user can change the price and thus total cost.

With that said, it's a toss up whether to do the initial calculations on the client side or server side. I tend to think it would be tedious to do every calculation on the server and send it up and down. It depends on several things.

"Where are the cart items stored?" If you are using one HTML page, in theory you can store it only on the client side until checkout. However, you should allow them to close the page (perhaps accidentally) and come back to it with the same session ID. (Sessions can last for years or decades, in theory.) Thus, the cart items should be sent to the server on each click and put in a database, keyed by session ID and perhaps other keys depending on how you're arranging the data. "Key" in this case means unique index or the fields that uniquely identify a row in relational or a document in MongoDB (object oriented DB).

You spoke of an unordered list in JavaScript. I have a few guesses what you mean, but I'm not sure. You can keep the cart in JavaScript as a global variable of the object type. Generally speaking, globals are frowned upon, but this is a reasonable use case for them. The MongoDB database entry can be the JS variable to a large degree, other than the Mongo version will have the session ID and perhaps some other added fields. Remember that if you go to a new page, you've wiped your JavaScript, so you'd especially need to make sure the server had the cart by then. (Again, the server should probably have it upon every click.)

> [">" indicating apprentice's words] the button executes Java script and that takes the data from their selections and stores it in a shoppingCartTotal

Close. Perhaps more like global variable object GL_RO_APIZZA_CART is the entire cart, with or without the total. You may or may not want to save the client-side total in a variable as opposed to displaying the calculation each time. That is, the number and type of each item need to be in the cart, but you don't need to keep track of the total in a variable. This also goes to a larger issue of when you store data that can be calculated. (I don't think I'll elaborate on that for now.) Which way to do it will probably occur to you when the time comes.

abstraction

> - my mind is abstracting away a lot of the data but I’m sticking to your principle of just getting something built. I can see how designing a template based system would be appropriate if I wanted to expand further with the software and incorporate into other small businesses. (Meaning making generic tiles that scan a database and pull in whatever data is there)

You have the idea. It's a tradeoff and a big question as to how much you want to abstract and when. One of my issues with Drupal and WordPress is that they have abstracted to the point they don't do anything specific well. Decades ago a comedian said, "I went to the general store, but I couldn't buy anything specific." That is part of the problem with CMSs.

So yes, in theory you can have generic tiles and generic interactions and calculations. It's hard to say how far you can take that before it becomes too generic / general.

crypto

Yeah, maybe. Sure. I'd get Federal Reserve Notes working first. (If anyone spends legal dollars at any pizza shop, anywhere, let me know. Legal dollars are still gold and silver minted with 2021 stamps by the US Mint. Paper and computer entries are at best fraudulent promises to pay real dollars at some point in the infinite future. The paper and computer bits represent private scrip created by the banks against your mortgaged house and other such collateral.)

corporations and legal protection

Note that I'm just an amateur legal hobbyist, so I can't give legal advice. With that said:

I am also currently judgment proof, so I'm not really one to talk, but with that said, I tend to think you're being paranoid. If your system accidentally charges someone $1,000, you return the money via the chargeback process, the same way a waitress would do at a restaurant. When the system first goes live, you should have access to the account for that purpose. You should probably always have access to the account.

As for credit card numbers, you're not storing them. The way PayPal and perhaps every other system can work is that the client pays on PayPal's system and your system gets a "callback" when the money is approved. You never see their credit card. For that matter, you don't need their real name, let alone their email. You just need something to tag their order with when they come to pick it up. This can be a small number if you recycle them often.

I'm curious whether you can find cases of individuals or small companies being sued for bugs. At this point it should be legally assumed that software comes with no guarantees. It is said that if buildings were built the way software is written, the first woodpecker would end civilization. One of my professors addressed that. The comparison is simply not fair to us. Builders can see and put their hands on and test and inspect everything. There is no such visible equivalent in software. We can only do our best within budget constraints.

Also, a not funny story along those lines. One of my brief quasi-apprentices created the type of corporation that fines you $400 per shareholder for filing taxes late. The business made absolutely zero money, and he was already paying fines. I howled laughing at that. I told him it was one of the best examples I'd ever seen of the cart before the horse, to which he (rather foolishly, as I'll explain) said that at least he had a cart.

Especially in the context of his "cart," people seem to forget that corporations (including governments) are not real in that they are not at all tangible. "The government" does not do anything, only people alleging to act for the government. You don't need a corporation to write software. You don't need a "cart." I've never incorporated and never seriously considered it. There was a situation many years ago where having an artificial entity tax ID would have saved me about $1,000, but the cost of creating and maintaining the entity probably would have approached that. I have no regrets.

That's a tax ID as opposed to a socialist insecurity number that refers to an equally artificial legal entity.

You may decide that there is enough reason to incorporate or write a trust. Trusts have the relevant legal protections of a corporation but don't need to be blessed by the government for their existence. If I were to go that route, I would create a trust.

One of my systems has processed something like $500k over several years in a somewhat different context. I did the first $25,000 "by hand" in that I processed each line item while watching it in the debugger (NetBeans) and stopping several times (breakpoints) for each item. I also had 20 - 30 checks, maybe more, to confirm that I was on the right account and only entering what the client approved. Yes, it was very nerve wracking at the beginning. After all these years, though, my "interlocks" and cross checks and such have done their job.

I've had a number of rather embarrassing bugs on much less critical parts of the system. At one point I lost a reasonable amount of data, although it was reproducible. In a rare event, two data-corrupting bugs have shown up in the last 5 weeks or so. One was likely a rather small amount of data lost that is also reproducible. The other might have caused some minor (moderate?) problems. But this project has a limited budget; I can only do so much testing in the areas where big money isn't at stake. With that said, I'd like to think I've learned something from 2 data-corrupting bugs in 5 weeks.

I know a good lawyer in your area, as we've discussed. :)

back to debuggers

I started this blog page 4 years ago in order to state rule #1, so it's at the bottom of the page. The quick version is "never dev without a debugger," as defined briefly just below.

Just to reiterate that Kwynn's rule #1 applies to both the client and server side. A browser's debugging tools can't help you on the server side. A "debugger" means that you can step through the code line by line, see where the code goes, and check the value of each variable at each point. echo(), print(), printf(), console.log(), etc. are not effective debugging tools. They have very limited purposes, and sometimes you can get away with this, but failure to use a debugger might literally have cost me $100,000s indirectly, so now the tale:

why I created rule #1 after the horse burned with the barn

This was years ago and the last time I tried working 9 - 5. I was working in Ruby and thus didn't know of a debugger. Quick searches didn't turn up any free ones. I don't remember if there were proprietary ones; in hindsight, $500 would have been worth it. I tried debugging with whatever Ruby's print() is. In part because I was so tired, I kept chasing my tail. Part of the problem was that they were using Heroku or something of the sort, which I didn't fully understand. The code was initiated from a worker process callback. A debugger would have brought that to light much faster. I never did solve that bug before I got tired literally beyond reason and quit.

back to debuggers, again

Writing code in gedit and going into NetBeans just for debugging is perfectly fine as long as you aren't hesitating to debug because you're not already there. Also, NetBeans is better at HTML decoration (such as coloration) than gedit. For one, gedit has a very obnoxious bug that causes it to lose all the decoration when I do "h" tags. I just tried it; that bug is still there.

I'm not set on NetBeans as long as you use a debugger (more options below). I have had good luck with it for many years, though. It has a few quirks, but I can live with them. One quirk in 12.4: it will not kill your code, either in PHP or C. That is, you hit the kill / stop button, and rather than dying, the code will go on to the end despite breakpoints. That is somewhat annoying, but I've learned to live with it, too. I may have to write around that, though, for some code. Also, I have not looked into it; there may be a simple solution.

Years ago I used Eclipse, and I briefly used it again last year. It works. I'm almost certain PHPStorm is proprietary, but in this case I'd prefer you use proprietary software rather than not use a debugger. I'm fairly sure there are other options.

back to client v. server and security

> which raises a question: can i keep all the data that im building on client side for the check out car and do all my calculations on client side as well, then send those off to the payment processor?

Note that my apprentice had not seen the above before asking this. To reiterate the above: you can do the calculations on the client side, but you must also do it once on the server side to check the paid amount.

> i'm assuming it's bad to keep the prices client side

It's fine to send prices to the client side as long as you check / confirm / recalc on the server side. You only have to check once against the payment on the server side. It's probably easier to do it on both sides. It's tedious to go back and forth with the server, so build the cart on the client side. Then do it just once on the server side.

One thing I didn't mention above is that this is a case for using Node.js as your server-side language. Then the exact same code can calculate on both sides. You can also use Node from PHP in at least two ways. In my generic logon / user identity system, I use the exact same code by calling Node as a shell script.
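
The shell script route is roughly like this, where calc.js is a hypothetical script that reads the cart as JSON from its argument and prints the total:

$json  = escapeshellarg(json_encode($cart));
$total = trim(shell_exec('node ' . escapeshellarg(__DIR__ . '/calc.js') . ' ' . $json));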

I've invested so much time in PHP that this relatively small issue isn't a reason to go to Node. It might be a reason for you to do so, though. I call it a small issue because it doesn't take long in any language to add up a total. That is, it doesn't take me long. What you're doing is non-trivial for a beginner. I'm sure you'll do some floundering.

I'm still deciding on some of the basics of my own webdev. Because certain projects involved Drupal, I didn't have full freedom. Now that I do have full freedom, I'm still working on the best way to do things.

> because someone could essentially change the submission price manually on the cart resulting in bad behavior.

Correct. That is the sort of thing you're protecting against by confirming on the server side.

More generally, any data sent from the client cannot be trusted and you have to consider all the mischief client data can do. So there is SQL injection, injecting JavaScript (or links) into data that may be displayed on the web, and injecting large amounts of data just to run your server out of space. Those are just a few.

In the web contact form in progress in my GitHub right now, I check the format of the pageid. I limit the number of characters. I escape the text when I display it on an HTML page. In other cases, I make sure numbers are numbers. I probably have not thought of everything in that case, but the stakes are not particularly high.
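
The checks in question look something like the following; the regex and limits are made up for illustration:

if (!preg_match('/^[a-z0-9_-]{1,50}$/', $pageid)) exit('bad pageid');
$text = substr($text, 0, 2000);     // cap the length so nobody fills your disk
echo(htmlspecialchars($text));      // escape before putting it in HTML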

modal

"Modal" is one of those terms that annoy me. They make it sounds like some very special thing. Is this a particular modal library, or is it just an example? How big is the library in bytes? I'll come back to this.

a rant on bootstrap.css

The minimized version of bootstrap.css (v3.4.1) is 120kb. The PHP date format documentation page uses it. I once decided I wanted my own copy of the date format table. I had a cow when I found out how big bootstrap.css was. So I started with the "maximized" / dev version and whittled it down to what I wanted. I count 3.4kb.

This also goes to the issue of being so general that it doesn't do anything specific well. Drupal uses Bootstrap, which is one of many issues I have with Drupal. I have elements' styling overridden by Bootstrap. It's very annoying.

back to modal

Anyhow, "modal" sounds so special, but it's not hard to do yourself. First of all, do you need anything of the sort? Why not just a plus and minus and a text number box for quantity? As soon as they push that button, it goes into the shopping cart.

If you want a modal popup-like effect, you can use CSS z-index and fixed positioning. z-index is a pain to use the first time if you don't have the decoder ring, but it may be worth it in the end. Here is the "flag" example. I thought I had another example, but I'm not finding it with a recursive search (grep -R z-index). The key, as I remember, is that the elements involved must have a "position" attribute rather than the default static. If this gets out of hand, let me know. I'm sure there is another example. Also, I have an example in a client's proprietary code.
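
For the record, the skeleton is something like this--hypothetical ids and values, and your JavaScript would toggle the outer div between display none and block:

<div id="kwOverlay" style="display: none; position: fixed; z-index: 10;
		top: 0; left: 0; width: 100%; height: 100%;
		background-color: rgba(0, 0, 0, 0.5);">
	<div style="position: fixed; z-index: 11; top: 20%; left: 20%;
			background-color: white; padding: 1em;">
		Quantity: <input type="number" value="1" min="0">
	</div>
</div>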

December 25

debugging PHP
a debugger

Remember that Kwynn's dev rule #1 is to the effect of "never dev without a debugger." I use Apache NetBeans as the GUI of my PHP and C debugger. Before NetBeans will install, though, you need both a JDK and JRE, which I address below. I don't think NetBeans is in the Ubuntu package repositories anymore, so download it directly from Apache. I'm using version 12.4. As best I remember, you download the file and then "sudo bash file.sh" to install it. You run bash because otherwise you have to turn the execute bit on, which you can do graphically or via the command line, but just running bash should work, too. You need sudo because it's going to install stuff all over the file tree such as something close to if not precisely /usr/bin and /usr/lib and such.

NetBeans needs a JRE and JDK. Installation notes below. I'm pretty sure I have used higher versions of such than the following, but these work, so might as well install what I have.

I'm going to list what I have and then explain how they relate to the install commands. I'm going to somewhat change the output to remove clutter. There is some chance you'll already have something installed. If so, see if it works before messing around with different versions.

apt list --installed | grep jdk

openjdk-8-jdk-headless/impish-updates,impish-security,now 8u312-b07-0ubuntu1~21.10 amd64 [installed,automatic]
openjdk-8-jdk/impish-updates,impish-security,now 8u312-b07-0ubuntu1~21.10 amd64 [installed]
openjdk-8-jre-headless/impish-updates,impish-security,now 8u312-b07-0ubuntu1~21.10 amd64 [installed,automatic]
openjdk-8-jre/impish-updates,impish-security,now 8u312-b07-0ubuntu1~21.10 amd64 [installed,automatic]

You'll need to install those 4 packages, where the package itself corresponds to everything before the first /, such as "sudo apt install openjdk-8-jre-headless"

Eventually you'll need to install "php-xdebug"

Then you'll need to make changes to both /etc/php/8.0/cli/php.ini and /etc/php/8.0/apache2/php.ini at the very end of the file, or wherever you want; see just below. I put my name in a comment to indicate where I started a change.

; Kwynn
xdebug.mode=debug
xdebug.client_host=localhost
xdebug.client_port=9003
xdebug.idekey="netbeans-xdebug"
           

Then restart apache (web server) for the apache-php changes to take effect: sudo systemctl restart apache2

Then "debug" a project inside NetBeans and you should get a green line in your code. Beyond that, I should give you a tour. And / or see if you can find discussion of how to use the NetBeans - xdebug - PHP debugger.

kwutils - very strict notice handling and "kwas()"

You'll note that many of my files begin with require_once('/opt/kwynn/kwutils.php'); /opt/kwynn is a clone of my general PHP utilities. You'll have to play with permissions to install it as /opt/kwynn. You can also of course do it however you want, but /opt/kwynn is probably a good idea if you want to easily run my code.

You and I should probably go over kwutils thoroughly some day and whittle on it. It's gotten somewhat cluttered, but I consider it professional grade in that I'm starting to use it in the new version of my regular (but 5 hours a week) paid project. Also, I've been using it on almost all my projects for about 18 months now.

In the first few lines of kwutils.php, I change the error handler such that notices and warnings kill your program just as thoroughly as a fatal error. I have never regretted this decision. It makes for better code. This may be less important in PHP 7 and 8, but I see no reason to change course. I don't think this would help much with your immediate bug, but it's relevant to debugging generally.
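
The mechanism is roughly this--a minimal sketch of the idea, not the actual kwutils code:

set_error_handler(function($severity, $message, $file, $line) {
	// a notice or warning now dies as thoroughly as a fatal error (unless caught)
	throw new ErrorException($message, 0, $severity, $file, $line);
});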

Combined with advice below, what would help is my "kwas()" function. It stands for "Kwynn assert," and I want it to have a very short name so that I am encouraged to use it ALL THE TIME, and I do use it all the time. First of all, in your case, use file_get_contents() rather than fopen and fread and such. I use fopen() very rarely versus "fgc".

kwas() does something like "or die()" but I like mine better for a number of reasons. Your code snippet just gave me an idea I should have had ages ago. I need to test something....

Ok, I just changed kwas() to return a truthy (yes, that's a technical word) value.
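
So kwas() is now roughly this--again, a sketch of the idea rather than the real kwutils version:

function kwas($value, $message) {
	if (!$value) die($message); // the real version does more than die()
	return $value;              // the new truthy return
}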

So now your code would look something like the following. I'm also going to change your path. The path issue might, in fact, be your problem. Also, if you're not using variables or a newline or something that needs to be substituted, use ' (single quotes) rather than " (double quotes). The __DIR__ is a more definitive way of saying "this file's directory." Simply using "." has issues that I have not entirely thought through. I am not guaranteeing the following will run. I'm giving you the idea. I'll never finish this if I test every snippet.

$path = __DIR__ . '/last-updated.txt';
echo(kwas(file_get_contents($path), "reading $path failed or was 0 bytes"));
            

All this may still leave you with another set of problems, so more stuff:

CLI versus web

Part of the problem you're having is that you're just getting a 500 error with no details. There are several ways to deal with that.

PHP is run in CLI (command line) mode or various web modes. Rather than figure out all the web modes, I have found that logical "NOT cli" reliably means web mode. I address this more specifically below.

I mentioned /etc/php/8.0/cli and /.../apache2 . So there is a different configuration for each, and thus different defaults. There are several relatively subtle differences in running PHP each way. In case it's not clear, cli mode means "$ php blah.php" and web mode means Apache or another web server is running the PHP.

Generally speaking, you can at least partially run your PHP web files from the command line. In your case, I think you'd see your bug from the command line. Meaning "$ php index.php" or your equivalent. It's a recent practice of mine, so it's not burned into me, but I'm starting to think you should go somewhat out of your way to make sure your web PHP can run as seamlessly as possible as CLI (command line) for dev and debugging purposes. That is, you may have to fill in stuff that would otherwise be filled in from Apache. Running in web mode is somewhat more painful for a number of reasons, so you should leave yourself the CLI option.

kwutils has iscli() to indicate CLI (command line) mode versus web mode. It in turn uses PHP_SAPI === 'cli', where PHP_SAPI is a predefined constant provided by the PHP interpreter. I mention this because in order to make your code dual-use (cli and web), you'll sometimes need to use that.
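
iscli() is a one-liner, plus a hypothetical example of the dual-use fill-in:

function iscli() { return PHP_SAPI === 'cli'; }

// when run from the command line, fake what Apache would have provided:
if (iscli() && !isset($_GET['pageid'])) $_GET['pageid'] = 'testpage'; // hypothetical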

When you have the NetBeans PHP debugger working, you can see all the superglobals and their values.

error.log and error display config

Did you look at /var/log/apache2/error.log ? That probably has the specific error.

By default, web PHP turns off displaying errors because displaying errors (when there are errors) allows anyone on the web to get variable names and such and thus make various injection attacks easier.

Your development machine is exposed to the web, and I'd imagine if you look at your access logs, you'll see that others have already found it. You're running with a 32 bit (IPv4) address, and there are relatively few of those, so bots can find many of them easily enough. (I would not assume that 128 bit (IPv6) is better protection. I'd imagine the hackers have already narrowed down what's in use.)

I mention this because changing the error display even on your dev machine will be seen by the world, and your app will hopefully soon be used in "the real world." We should both give some thought to the implications, but I would err on the side of everyone seeing you err. :) As you see various messages, we should both consider what anyone could get from that. Otherwise put, this is a case of "security by obscurity" probably not being particularly secure.

Besides, this is a small shop, not a bank or crypto exchange. You can almost certainly use PayPal (or others) such that the user's data is not in your system, or at least it's minimally in your system.

With all that said, to turn on errors, change this in /etc/php/8.0/apache2/php.ini:

; Kwynn
display_errors = On
           

Then restart Apache. (The above is line 503 in my file.)

misc audio files as "music" - part 2

This is my 2nd entry and 3rd "h5" header for today. This also gets off topic from web dev, but this is a continuing discussion with one of my apprentices, so I'll leave it here.

This is a followup to non-audio files played as "music." First of all, the "stop" button works for me on Firefox 95 Ubuntu desktop. I haven't checked my web logs to see which user agent you're using. If you come up with a fix, I will almost certainly post it as long as it also works for me. I am not going to chase that bug now. You can add it to the endless list of stuff we might do much later.

As for how it works... Any audio recording encodes a series of volume levels; it's only a matter of how it's encoded. A CD "is a two-channel [stereo] 16-bit ... encoding at a 44.1 kHz sampling rate per channel." (1 / 44,100) === 0.000022676 or 22.676 µs. So, every ~22 microseconds the recording system records the volume of each microphone as a 16 bit number, so 65,536 possible volume levels.

The .WAV may be the original computer sound format. A quick search shows that the original WAV format was the same bitrate as a CD and that SatanSoft once again rears its head. A WAV has a 44 byte header and then the rest of the file is audio encoded as above or else variants of the sample rate and volume bits. For the "symphony," I used 8 kHz and whatever the default volume bits are.

The commands I used to create the WAV are just above the "play" button. I took an Ubuntu install ISO file and treated its bits as sound. (The ffmpeg command added a WAV header.) The result was interesting. It has a beat and an odd sort of music. There's no telling what other files would sound like. I'd imagine people have played with that.

Firefox caching

First of all, remember that you usually need to refresh a page before you see changes. Firefox can be stubborn about that. By default, Firefox does a soft refresh. Control - F5 should do a "hard" refresh, but even that doesn't always do the job. The problem gets worse with mobile browsers and external JavaScript and CSS. Consider putting versions or unique timestamps in all the relevant files to see if the right page is shown. Sometimes changing the query on the page helps refresh it, such as /?blah=1 /?blah=2 etc. The query doesn't have to be meaningful or used, but the browser interprets that as a different page, so it may refresh the cache.

When testing mobile, I have had to put JavaScript back into the HTML page as the easiest way to force a refresh of the JS.

To check CSS, sometimes I change the color of a certain element just to check the version. With JavaScript you can set a version with document.getElementById('versionElementForFileXYZ_JS').innerHTML = '2021_1225_1846_25_EST';

I have never taken the following to this extreme, but I suggest a technique below. Rather than going to extremes, once you're aware of the problem, you can usually eventually get everything to refresh. Also, I'm sure I'm missing options. I haven't gone looking all that hard once I understood what the problem was.

A perhaps too extreme measure would be combined server and client code that checks disk timestamps against what's rendered. For CSS, the server code would create a CSS tag like ".cssV20211225_1843_22_EST" or both human readable and a UNIX Epoch timestamp. Then the JavaScript would do a CSS selector query for the existence of that CSS tag.
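
A lighter-weight cousin of that idea is the query trick above, automated: key the stylesheet's query string to its disk timestamp, so the browser sees a "new page" whenever the file changes. A sketch, with a hypothetical file name:

$v = filemtime(__DIR__ . '/style.css');
echo("<link rel=\"stylesheet\" href=\"style.css?v=$v\">");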

W3 validator referer

Update: see my first January 13, 2022 entry. The "referer" generally won't work anymore.

Always point the validator check to https rather than http, such as https://validator.w3.org/check?uri=https://blah.example.com/page1.html. If you try to validate a secure page with an http link to W3, it won't work because the browser will not send a referer from a secure page to a non-secure page.

As to why "/check?uri=referer" works, I think I implicitly assumed for a very long time that this was some sort of standard. It's much simpler, though. It's specific to that particular W3 validator tool. Whoever made that tool can write his "?" queries however he wants. It's written such that if you use the "referer" HTTP query argument, the code checks the HTTP request header for the "Referer". Look at your network traffic, and for a .ico or .png or .js or whatnot, you'll see a "Request header" "Referer" field, which is a link back to the HTML or PHP page that called the .js file or whatnot. The W3 code reads that referer and thus knows what page to fetch. (Control-shift-I and then the "Network" tab shows you the network traffic AFTER you load that tab, so you will have to refresh.)

I wouldn't call it an "API," either. Again, it's much simpler than that.

As for how I knew to link that way, I found the documentation, but I found it because I knew to look for it. Off hand, I did not quickly see that linked from the validator itself. Upon thought, my best memory is that my webdev professor in 2005 showed us that technique. He definitely pointed us to the validator.

As for reading request headers in PHP, one option is apache_request_headers(). I use this in my CMS ETag and modified time test, function exit304IfTS() at the bottom. I think I only implement one of the two so far. It's on my agenda to implement the other.
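
The idea is roughly the following--a sketch, not my actual exit304IfTS():

function exit304IfNotModified($file) { // hypothetical name
	$mtime   = filemtime($file);
	$headers = apache_request_headers();
	if (isset($headers['If-Modified-Since']) &&
			strtotime($headers['If-Modified-Since']) >= $mtime) {
		http_response_code(304); // "not modified" - browser uses its cache
		exit;
	}
	header('Last-Modified: ' . gmdate('D, d M Y H:i:s', $mtime) . ' GMT');
}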

December 24 - 2 entries (so far)

16:38 EST entry

This continues a discussion with one of my apprentices, so I may switch from "he" to "you" again.

Today's edition begins with a question about a template and pulling from a database versus hard-coding the menu (see yesterday's entry below). He was concerned about loading delays. You'd have to be the average Indian so-called developer to delay loading that much, or a white American who doesn't understand databases worth a darn and uses loops instead of SQL joins. I once had a manager try to tell me that the queries were "very complicated" and thus they took several seconds. The queries were trivial, and the code should have run literally several hundred times faster.

The point being that loading delay in the context you mean has not been a problem on any hardware in the last 10 - 15 years.

You bring up a more interesting point, though. There is always a tradeoff between making data entry easy versus the entry code making the overall system much harder. Otherwise put, how much trouble do you want to go to at various stages of the project to make it easy for the pizza shop folk to make changes? Given my philosophy of "make something work now versus frittering on perfection forever," I would not worry yet about letting them make changes. At the start, you're presumably going to be on hand pretty much every day. Get the system making money, then decide when it's worth making the tradeoff to let them take some of the workload.

With that said, this brings up the question of validating prices on the server side. Say you hard-code $5 as the price of an item. The client orders one of them, but the client is mischievous and lowers the price to $1. You should always check such data on the server side. So this brings up the interesting question of how to encode the price such that it can both be rendered and checked easily. Putting the price in various data formats makes sense: a database, a CSV file, a JSON file, XML, raw text, etc. Then you'd have to do a bit of processing to render it, but you'd have the validation on hand.
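
For example, with the prices in a JSON file, the server-side check is only a few lines. prices.json and the field names are made up for illustration:

// prices.json: {"cheese": 5.00, "pepperoni": 6.50}
$prices = json_decode(file_get_contents(__DIR__ . '/prices.json'), true);
$total  = 0;
foreach ($cart as $item => $qty) { // $cart came from the client: verify everything
	kwas(isset($prices[$item]), "unknown item $item");
	$total += $prices[$item] * $qty;
}
// now compare $total against what the client claims / what was paid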

17:42 entry

You mentioned a data object, or a DAO: data access object. This brings up a big question that has many possible good answers: how do you go about getting from the database to the HTML? I have gone back and forth between two methods. I give examples of both further below, once I explain them.

I'm about to explain my interpretation or variant of the MVC pattern or framework--model view controller. The model is the database code that works with the data model. You might call this the far back end (server-side). The controller is in the middle and interacts between the other two. The controller might be on the back end or front end (browser client). The view is the code that creates the human-readable format including the HTML. The view may be created either on the front end or the back end to a degree, but the end result is part of the definition of the front end because it's the front side that the user sees.

A DAO whose only job is to interact between the database and the rest of the code is a good idea in some situations. Less strict but sometimes more practical is code that accesses the db and does the first round of transformations towards HTML.

Once again, you may want to make something work first, however you can. Even 2 - 3 years ago (18 months ago?), I might make a big mess in terms of the code logic, but the end result worked. Then I started cleaning the code, sometimes. Now I am actually starting to code with my variations on MVC. You can see the step by step progress in git commits.

I've gone back and forth between two variants of MVC. My jury is still out, but the technique I am starting to favor goes something like this... Write 2 - 4 layers of PHP code. One or two layers fetch from the database. The second back-end layer may process the data closer towards the end product. Then you may have a layer that makes the data completely human readable, such as turning float 5.0 to string "$5.00" This layer may also do the loop that creates an HTML string of table data. The final PHP layer can be echo statements embedded in HTML that write the final product.
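
A hypothetical sketch of those layers, assuming a PDO connection in $db and a "menu" table:

// dao / model layer: fetch the raw rows
function getMenuRows($db) {
	return $db->query('SELECT item, price FROM menu')->fetchAll();
}

// inner view layer: make it human readable and build the table rows
function menuRowsToHtml($rows) {
	$html = '';
	foreach ($rows as $r)
		$html .= '<tr><td>' . htmlspecialchars($r['item']) . '</td><td>$'
			. number_format($r['price'], 2) . '</td></tr>';
	return $html;
}

// the outer template is HTML until a PHP tag that does:
// echo(menuRowsToHtml(getMenuRows($db)));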

Let's take my very recent user agent code. "p10.php" is the innermost layer. Often I actually use the term "dao." In this case I didn't, but p10 is serving as the DAO and it's doing the loops that lay out the data in an array that is close to the HTML table format. "p10.php" is the model. "out.php" is the inner view--the part of the view closer to the back-end model. It changes integer 25000 to string "25,000" and has the loop that creates most of the HTML. Then the template.php has "echo()" functions to write the strings.

The other technique is to create JSON at the PHP side and then let client-side JavaScript process the JSON. I did it that way in a previous user agent version.

I think the more recent way is better, but I'll know more when I get back to my long-term paid project. I'm going to have to make that decision soon.

17:56

Regarding an internal "style" tag or external CSS: I totally rewrote my home page yesterday and posted it an hour or two ago. I was running all over the place adding "class" attributes. I find it easier to have the class attribute and the relevant styling in the same page rather than switching back and forth. This may depend on how big the file is, though. For a big file, going up and down is harder. As I said yesterday, one answer might make more sense during dev and another once you're done dev'ing. I'm not making an argument against your point. I'm just explaining my reasoning.

Regarding big files, here is a thought. When you create a php file, it *IS* an HTML file until the <?php tag, like my "template.php" I mention above. One result of this is that you can use require_once() to add HTML fragments. So, with a large file, you can have a central PHP file that calls subfiles to put them together.
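
A hypothetical central file, where each fragment file is itself plain HTML (or HTML until its own PHP tag):

<!DOCTYPE html>
<html lang="en-us"><head><title>central page</title></head><body>
<?php
	require_once(__DIR__ . '/header-fragment.php'); // hypothetical fragments
	require_once(__DIR__ . '/menu-fragment.php');
?>
</body></html>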

December 23

This is in response to an apprentice's question. He is continuing his own version of the pizza shop online ordering.

The following may or may not be off topic. Perhaps it's past time to say that, as far as I know, he has a pizza shop in mind that simply sells pizzas and is not owned by the man who is mysteriously the 49th most powerful man in Washington for owning a pizza shop. (In all these years, I'd never actually seen the text, but there he still is, 9 years later: #49 James Alefantis.) You'll note that I put $5 or $10 on my version, not $15,000 because I'm selling something other than pizza.

In any event, I will try to keep my technical hat on. In his version several hours ago, he had "pizza.php" and "salad.php" and such, activated by clicking each category of the menu on the left side. He asked my thoughts on this.

I'll switch to "you" rather than "he." I have to start with my pet peeve. You didn't close a div, so I'm sure your page is HTML5 invalid. Firefox "view source" shows the close body tag as red; that's why I noticed. (I may have noticed by eye soon after.)

I have to appreciate your use of ":hover" and "active". (Why is it :hover and .active? Presumably because :hover is a built-in pseudo-class the browser applies, while active is an ordinary class your code adds and removes, hence the dot. It seems to work, anyhow.) Remember that I try to avoid "pretty" web sites, so I'm only partially aware of such things. I'm glad you reminded me because it's a useful cue to the user. I probably use JavaScript in situations where CSS does the job more naturally.

You might consider pulling your styling into the one HTML page during parts of development. There are arguments either way. I find it useful to have everything right there. As you head towards going live, it probably makes sense to have an external style sheet, but I still argue with myself about that. I'm not sure there is one right answer, either. You can cut and paste your CSS between the two such that the blank external CSS is always there ready to go. There is no reason to remove the "link" tag or delete the external stylesheet, unless you firmly decide to stay within the HTML. And when the previous version is in git, you don't even have to be firm. :)

Now to the original question: about using separate PHP files in that manner. First of all, when you're doing one of your first web apps, whatever works or even heads in the direction of working is progress. With that said, there is no need to reload the page with full-page HTTP calls in your case. Once you have basics of the page, clicking on a menu category should call AJAX JavaScript and only refresh the center of the screen. The AJAX makes the call to PHP.

With *THAT* said, then you get into the question of "single page" PHP. As much as I despise WordPress and Drupal, their notion of single page probably has some merit, although I think they take it too far, and their version gets too complex. Single page means that there is a web server (Apache) rewrite in .htaccess that routes all requests through index.php. The index then routes the requests as needed.
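
One common form of that rewrite--and I'm not claiming this is exactly what WordPress or Drupal ships--is an .htaccess like:

RewriteEngine On
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^ index.php [QSA,L]

The two conditions let real files and directories through; everything else lands in index.php, which routes from there.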

Then again, the single page thing may be too much for now. I still have not used it when I'm writing from scratch, but I'm considering it. I may have an update on this in 2 - 3 weeks as I make this decision in a "real world," paid project. (It's not a new project.)

entry history

I expect I'll be revising this for a while, so it needs a history.

  1. nevermind. I hope I labelled the entries well enough
  2. 2021/12/24 17:56 EST - 3rd new entry, same
  3. 2021/12/24 17:42 EST - 2nd new entry, labeled with timestamp
  4. 2021/12/24 16:38 EST - new entry, labeled as "16:38"
  5. 2021/12/24 15:53 EST - fixed Alefantis link
  6. 2021/12/23 17:51 EST - prepping for first post

2021, August 28 - Asterisk compilation revisited

This is a follow-up to previous entries.

I have limited download bandwidth at the moment (long story), and I still haven't perfected VMs and / or Docker and such locally, so in order to get a clean installation and compilation slate, I'll rent an AWS "on demand" instance. Hopefully it will cost me 10 - 20 cents. I want an x86_64 processor so that it's closer to my own machine. I might as well get a local-to-my-instance SSD / NVME (as opposed to an EBS / network drive) for speed, and I should use "compute optimized" because I will peg the CPU for a short while. So my cheapest option seems to be a c5ad.large, currently at $0.086 / hour in northern Virginia (us-east-1).

Instance details: Ubuntu 20.04 (probably will remain the same until 22.04), x86 (just to make it closer to my local machine - "x86" is implied x86_64). Type c5ad.large. I would give it 12 GB storage for the EBS / root drive rather than the default 8. 8 GB may be too little. Assuming you have a VPC (VPN) and ssh keys set up, that's all you need.

Today's greatly improved compilation commands. Notes on this below.

For current versions, one of the first steps calls for downloading "asterisk-xx-current," so be sure to check the relevant Asterisk download directory for higher versions. Note that the versions are not in any useful order, so you'll have to look carefully and / or search. The documentation still references version 14.x.y. I compiled version 18.

When everything is compiled / you're done, the directories use exactly 1 GB (call it 1.1 GB to be safe), but that may grow with future versions.

When running the step "sudo ./install_prereq install" note that the US telephone country code is 1

Note that downloading dahdi and dahdi-tools from Asterisk, as shown in their directions, will not work with recent Linux kernels (5.11, and perhaps somewhat earlier) because the Asterisk-hosted versions are behind. My instructions have you compile from source.

The compilations of dahdi, dahdi-tools, and libpri are quick. Asterisk itself takes almost exactly 5 minutes. From reboot, elapsed time for this day's attempt #1 was 38 minutes; the second attempt was about 23 minutes. I forgot to check the final one. I believe I posted attempt #4 above.

My previous attempt at compilation instructions (days ago), just for the record.

2021, August 22 - 25 - Cardano / Ada cryptocurrency

As of several days ago, I have a Cardano "stake pool" running. It is public, but, for a number of reasons, I'm not going to advertise it, yet.

These are notes on setting up a stake pool. In short, a stake pool is the rough equivalent of a Bitcoin mining node. Bitcoin is "proof of work" (mining); Ada is "proof of stake" (user investing). Bitcoin uses an absurd amount of energy to "mine." Ada's trust is established by the community investing in stake pools. That's the very brief sketch.

The official instructions are fairly good, but, as is almost always the case, they leave a few things out, some things are clear as mud, they make assumptions, etc. These are my annotations.

hardware requirements

Because the instructions start with hardware requirements, I will, too. I seem to be doing fine with 4 GB of RAM, HOWEVER... I have a big, fat qualifier to that further below. I am running two AWS EC2 "c5ad.large" type instances--one for the relay node, and one for the block producer. For "on demand" / non-reserved, $0.086 / hour X 24 hours X 30.5 days (average month) X 2 instances (block producer and relay) = $126 per month just for the CPU. Storage fees are more; that will take a while to nail down precisely; roughly, I'd say that's another $25 / month. Note that reserving an instance--paying some in advance--cuts CPU prices in half. See "reserved instances."

I'll express drive space in two parts. The EBS Linux root ( / ) is only using 2.4 GB with an Ubuntu Linux 20.04 image; the chain database is NOT on root, though (see below). If you decide to save / log the output of the node, note that the block producer has produced 118 MB of output in about 3.7 days; 242 MB in about 7 days. I assume the relay node is much less; I'll try to check later. The block producer outputs every second because it's checking to see if it's the slot leader. The "slot leader" is the rough equivalent of winning the Bitcoin mining lottery and producing a block on the blockchain.

As for the chain database, it is currently 13 GB. Based on everything I've seen, the rate of increase of the database is likely to grow for weeks or months.

After that 3.7 days, I have only been charged 9 cents for 1 GB of output to "the internet" outside of AWS. However, billing is several hours behind. (11 cents in 7 days)

As for their assertion "that processor speed is not a significant factor for running a stake pool...." That appears to be true for the most part, but there are some exceptions, just below.

exceptions to hardware reqs

Processing the ledger ($ cardano-cli query ledger-state --mainnet > /tmp/ledger.json ) used 4 CPUs (cores), took 10 GB of RAM, and ran for about 5 minutes. The ledger was 3.8 GB several days ago. It compressed to 0.5 GB. Don't try running this on a stake pool node / instance unless you're sure it can handle it, and even then it's probably not worth the risk.

I ran the ledger on an AWS EC2 c5ad.2xlarge instance. I ran it for 0.8 hours X $0.344 / hour = $0.28. That's how long it took me to copy the chain / database from EBS to the local nvme (ssd), load up the Cardano binaries and basic config, sync the database between the saved chain and the current change, run the ledger, compress, and download the ledger.

Similarly, I would be careful running any queries on a live stake pool. I have reason to believe that even short queries like utxo will slow the system down enough that it may miss its slot. In other words, a stake pool node should do nothing but route and process blocks. It should only be running the node, not foreground ad hoc commands.

The instructions try to push you to compiling or Docker, but the binaries available for x86_64 Linux work just fine. The binaries are linked from the cardano-node GitHub or the "latest finished" link. I am using v1.28.0.

You'll want to put the Cardano binaries path in your Linux PATH environment variable. While you're at it, you should decide where you're going to put the Cardano socket. It can be anywhere that your standard user has access to create a file. Cardano runs as the standard user, not a sudoer or root. I won't admit where I put mine because I'm not sure it's a good idea, but I call it, abstractly, /blah/cardanosock which assumes the standard user has rwx access to /blah .

Substituting your own binary path and socket, add these lines to ~/.bashrc :

				export PATH="/opt/cardano:$PATH" 
				export CARDANO_NODE_SOCKET_PATH=/blah/cardanosock
			

Then don't forget to $ source ~/.bashrc for every open shell. The contents of .bashrc don't load until a new shell is opened or you "source" it.

I had never installed or used Docker before. On one hand, I got it all running very quickly, but I haven't learned to deal with Cardano's Docker image limitations yet. It was 40 MB when running, as I remember, which is impressive, but that leaves out too many commands. I may start with an Ubuntu docker image and try to build my own Cardano Docker image at some point. Beyond a quick test, I have not used Docker.

On the config file step, I would add that you need to use the same command, for both test and main, to get testnet-alonzo-genesis.json or mainnet-alonzo-genesis.json . Use the same wget command except substitute the appropriate alonzo file.

Wherever you see --mainnet , you substitute "--testnet-magic 1097911063" (without quotes) for the current testnet. The addresses step shows you how to create a test address such as addr_test1xyzabc123.... In the testnet, you get test Ada (tAda) to play with from the faucet. Enter an address such as the above.

Note that you won't see your funds in utxo until your node catches up with the chain. I don't remember how long that took in test: somewhere between 1 - 5 hours. The config file page above shows you how to query your tip. Note that a slot is created every 1 second, so you are comparing your progress against historical slots. You can see where the test and main chains are at the test Explorer and mainnet Explorer. An epoch is 5 days, although I don't know if that ever changed. We are currently in the "Mary" era; I am still not sure if that's the same as the "Shelley" era.

For reference on the slot currents times, from which you can calculate the origin:

net        slot (elapsed seconds)    as of
mainnet    38382516                  2021/08/26 03:33:27 UTC
testnet    35579709                  2021/08/26 03:35:25 UTC

On a similar point, if you are still syncing, when calculating your "--invalid-hereafter" make sure to calculate against the live chain, not your tip. Otherwise, your transaction will be immediately invalid (or will have been invalid a year ago).

The mainnet takes something like 27 - 35 hours to sync. Given that the chain is a linear chain, only 1 CPU / core can be used. Note that the density of data goes way up in the last several months, so you'll plow through historical seconds / slots, and then it takes much longer to process the last few months.

I never got the mainnet loaded on my own computer. For one, the fan ran like I've never, ever heard it before. One day I will likely sync my computer by downloading the chain (see more below).

Regarding utxo and transactions, it wasn't until one of the final steps that I was confronted with a situation where the payment of the 500 Ada stake pool deposit had come in 4 transactions, which is 4 utxos. I had to use 3 utxos to get enough Ada. Below I am shortening and making up utxo addresses / ids:

cardano-cli transaction build-raw \
--tx-in abcde#0 \
--tx-in abcdf#0 \
--tx-in abcdg#0 \
--tx-out $(cat payment.addr)+0 \
--invalid-hereafter 0 \
--fee 0 \
--out-file tx.draft \
--certificate-file pool-registration.cert \
--certificate-file delegation.cert
			

That command comes from the stake pool registration page.

Also when building transactions, keep track of the tx.raw and tx.draft. The draft command and raw command are similar, so it's easy to get that confused. Look at the timestamp and file order of the tx.raw and tx.draft to help keep track. If you mess this up, you'll get a "ValueNotConservedUTxO" error, mixed in with a bunch of other gibberish (partial gibberish even to me!).

Once you submit a transaction successfully, it will show up in the Explorer (see above) within seconds, perhaps 20 seconds at most. Deposits show up in the Explorer as deposits.

Regarding topology:

block producer (I changed the exact address, but it is a 10.0.x.y, which is the relay's address on the same VPC):

$ more kwynn-block-producer-topology-2021-08-1.json
{
  "Producers": [
    {
      "addr": "10.0.157.52",
      "port": 3001,
      "valency": 1
    }
  ]
}
			

Assuming the block producer is running on port 3001, the block producer firewall only needs to admit 10.0.157.52/32 for TCP.

relay:

$ more kwynn-topology-relay-2021-08-1.json
{
  "Producers": [
    {
      "addr": "relays-new.cardano-mainnet.iohk.io",
      "port": 3001,
      "valency": 2
    },
    {
      "addr": "10.0.157.53",
      "port": 3001,
      "valency": 1
    }
  ]
}
			

The relay needs to admit "the world" on TCP 3001 (or whatever port it's on) because it's receiving from the world.

final stake pool steps / public pools

Using Github for storing metadata is a good idea. Note that the git.io URL shortcut will work for anything in GitHub, including repository files or specific repository versions. That is, you don't have to use a Gist. I am using a standard repo file and / or a specific version; I don't remember what I settled on. The metadata hash is public, so I saved it in the repo.

(My site is getting queried 30 times a day for testnet; I really need to de-register that thing one day, and return the utxo to the faucet.)

You have to pledge something in the "cardano-cli stake-pool registration-certificate" command, but it seems that it doesn't matter what you pledge. I would assume that the amount has to be in payment.addr, though. The pool cost must be at least the minimum cost as defined in protocol.json in "minPoolCost". pool-margin can be 0 but must be set. You do not need a "single-host-pool-relay" if you're not using one; an IP address does fine.

As far as I understand, you do not need a metadata-url or metadata-hash, but that's what defines a public pool. See below.

public pools specifically

This Cardano Docs page appears to define what a public pool is, but so far I can't get my client's ticker to list on AdaTools. I can get it to list on AdaTools and Pool.vet by pool id.

What I think of as the final stake pool registration page has this command:

cardano-cli stake-pool id --cold-verification-key-file cold.vkey --output-format "hex"

That pool ID is public--it's in the public ledger. It begins with "pool". I'll use other pools as examples, but both show by poolID: AdaTools by pool ID and Pool.vet by pool id.

Pool.vet by ticker works for my client, but AdaTools does not find it in its search.

More importantly, he can't find or pledge to his pool in his Cardano Daedalus wallet. Otherwise put, I seem to be having problems declaring his pool "public," even though pool.vet shows that the metadata hashes match. My only theory at this moment is that I created the pool during epoch 285; epoch 286 is now, and I set it to retire at the end of 286. It's possible that the wallet won't show a pool set to retire in a few days. I thought I had properly un-retired the pool, but results are uncertain after several hours. So far I haven't processed the ledger again to see if the retirement is cancelled.

entry history

I wrote much of this on August 22, 2021.

stuff to update (note to self)

2 VOIP / SIP / Asterisk / voicemail entries:

2021, August 22 - VOIP / SIP / Asterisk - voicemail working

Per my previous entry (7/26), I got voicemail working on July 31. It's taken me a while to write it up in part because I started another project that I hope to write up soon.

I wound up changing my Asterisk system to UDP, so if you're following along at home, be sure to set Amazon Chime's console to UDP. For the moment that's step 16 in the previous entry.

The outgoing voicemail message is limited to a subset of audio formats and "settings." I used an 8kHz, 32 bit sample, mono file. I'm almost certain one can use a higher sampling rate, but it will do for now. The Linux / Nautilus metadata says a 128 kbps bitrate for that file. For what it's worth, 8 kHz at 16 bits mono works out to exactly 128 kbps, so one of those numbers is probably off; I leave the rest as an exercise to the reader. My file is kwprompt3.wav placed in /var/lib/asterisk/sounds/en . You'll see kwprompt3 without the .wav extension in extensions.conf.

The big problem I had getting voicemail working was that everything would work fine, and then Asterisk would hang up after 30 seconds. That's particularly funny because my potential client is seeking developers because none of the VOIP / voicemail providers allow a voicemail over 10 minutes. My client potentially needs several hours, or perhaps somewhat beyond that. Effectively, he needs unlimited voicemail.

The two keys that led me to a solution were very verbose logging and SIP debugging. The first was setting logger.conf to give me very verbose output--the (7) indicates 7x verbosity. I've seen examples give 5x, so I don't know whether 7x gives any more, but it works. The other key was to set "debug=yes" in pjsip.conf, shown in the same file above.

When I called the voicemail phone number and looked at /var/log/asterisk/full, I would see the SIP INVITE transmitted over and over. I don't remember which way the INVITE goes; the packets are sometimes hard to interpret. In each INVITE, I would see 2 lines that began with "Via: SIP/2.0/TCP" and "Via: SIP/2.0/UDP" The lines were next to each other. The TCP line was to an external IP address; the UDP line was to an internal IP address (10.0.x.y). The Amazon Chime system that was routing the call to me is definitely external to my AWS VPC / VPN, so this was a big hint: the INVITE exchange was not being completed because the packet wasn't going from my system to the external internet. After 30 seconds, Asterisk would issue a SIP BYE command and hang up.

It took me several hours to stumble across the solution: at least one of the entries "external_media_address" and "external_signaling_address" in pjsip.conf (see previous conf links). I set them to the external IP address (Elastic IP) of my Asterisk instance / virtual machine. Then it worked!

Given my setup, the voicemails are stored in /var/spool/asterisk/voicemail/vm-try1/1/INBOX . The same voicemail is stored in 3 formats. I assume that is the line in voicemail.conf "format = wav49|gsm|wav" That's a 1990s era raw wav format, a modern, compressed WAV format (wav49, apparently), and a gsm format. The WAV and GSM are of a similar size. Given the purpose of this project, keeping the raw wav format is probably worthwhile. Off hand, I hear very little difference, but I have not tested that hard and with very many voices / conditions.

So far my potential client left a 42 minute voice message which, as best I can tell, worked fine. (I have not exhaustively tested it, but that's another story.)

2021, July 26 - VOIP / SIP (last revised roughly 9:30pm my time)

The result of the following is that I reserved a phone number and dialed it and got literally "hello world" from my Asterisk server.

A few days later I had voicemail working. Voicemail is in my August 22 entry.

Asterisk

I answered an ad about VOIP. The key requirement of the project was that the client needs to be able to leave more-or-less arbitrarily long voice messages. I haven't gotten to the point of just how long, but definitely well over 10 minutes. I would guess that an hour is needed. The problem they had is that they talked to 15 VOIP providers and no one went over 10 minutes.

I had a brush with a VOIP project in early 2016, and I've always wondered "What if?" I played some with the Asterisk software but couldn't make much of it. I compiled it and had it running in the barest sense, but didn't get it to do anything. Asterisk is of course free and open source.

In part because I had unfinished business from 2016, I started experimenting. Then I got obsessed and started chasing the rabbit. After about 21 hours of work spread over a week or so, I have most of the critical elements I need in two "pieces"--part in the cloud and part on my own server.

UPDATE: I greatly improved the following on August 28. I eliminated all "tail chasing." I also wrote up new notes in a new blog entry.

Here is an attempt at an edited version of my Asterisk install command history. One important note is that some of that was probably tail chasing versus:
sudo ./install_prereq
sudo ./install_prereq install

Then I changed 4 config files.

Probably more to come, but I have an apprentice live right now reading this.

AWS

In almost all cases, the AWS documentation is excellent. In this case, I chased my tail around. In the end, I got somewhat lucky. Of all the weird things, I have the darndest time finding the right AWS console. The link is for the AWS Chime product including "voice connectors." So THERE is the console link.

I have the "hello world" voice which will probably download and not play. Someday perhaps I'll make it play. It's a lovely, sexy female voice--a brilliant choice on the part of the Asterisk folk. REVISION: I got some grief over "sexy." Perhaps she's only sexy when you've spent 21 hours getting to that point.

I just confirmed that the Chime console does not save in your "recently used" like everything else does. So I'm glad I recorded the link.

At the Chime console, you'll need the 32 bit IP (IPv4) address of your VOIP server, or domain name. With only a bit of trying and study, I could not get 128 bit IP addresses (IPv6) to work--they were considered invalid.

  1. At the Chime console, go to "Phone number management," then "Orders," then "Provision phone numbers."
  2. Choose a "Voice Connector" phone number. (I am using SIP, but don't choose that option.)
  3. Choose local or toll free, then pick a city, state, or area code. Pick a number or numbers and "provision."
  4. After "provision" / ordering, it may take roughly 10 seconds to show up in the "Inventory" tab. You can use the table-specific refresh icon to keep checking (no need to refresh the whole page)
  5. Go to "Voice connectors" and "Create a new voice connector"
  6. The name is arbitrary, but I believe there are restrictions on which characters are allowed.
  7. You'll want the same AWS region as the VOIP / SIP server.
  8. I have not tried encryption yet, so I disable it. (One step at a time.)
  9. "Create"
  10. click on the newly created connector
  11. Go to the "origination" tab
  12. Set the "Origination status" to Enabled
  13. Click a "New" "Inbound route"
  14. Enter the IP address or domain of the Asterisk "Host"
  15. the port is 5060 by default
  16. protocol is whatever you set the VOIP server to. I used TCP for a test only because it's more definitive to tell if it's listening
  17. set priority and weight to 1 for now. It's irrelevant until you have multiple routes.
  18. Add
  19. Save (This additional step trips me up.)
  20. Go to the "phone numbers" tab and "assign from inventory." Select your phone number and "assign..."
  21. Set /etc/asterisk/extensions.conf to the phone number you reserved (see my conf examples above and the sketch just after this list)
  22. Restart Asterisk if you changed the number, or reload without a full restart (I believe "dialplan reload" at the Asterisk CLI does it).
  23. Make sure Asterisk is running. I find it best to turn it off at the systemctl level and simply run "sudo asterisk -cvvvvvvv". Leave the Asterisk prompt sitting open so you can see what happens.
  24. open up port 5060 at the AWS "security group" level for that instance
  25. Dial the number and listen to "Hello world!"
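For orientation, here is roughly the shape of the extensions.conf dialplan I mean--a sketch, not my exact config; the context name is whatever your SIP configuration routes to, and the extension is the number you provisioned:

[from-voice-connector]
exten => +15555551234,1,Answer()
 same => n,Playback(hello-world)
 same => n,Hangup()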

2021, July 8 - zombie killing

I can now add zombie killing to my resume. I logged into this website roughly 30 minutes ago and was greeted with the "motd / message of the day" notice that there were 75 zombie processes. I barely knew what a zombie was.

First I had to find out how to ID a zombie. The answer is "ps -elf | grep Z". My new "simptime" / simple time server was causing the problem.
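A slightly tighter version, since "grep Z" also matches any line that merely contains a capital Z:

ps -eo pid,ppid,stat,comm | awk '$3 ~ /^Z/'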

It didn't take long to more or less figure out what a zombie is, but it took just slightly longer to find what to do about it. When a process forks, the parent is supposed to be fully attentive waiting to receive the exit / return value of the child, or it is supposed to make itself available (signal handler) to receive the value. If the parent is sleeping or waiting for something else, the parent never reads the return, and the child's entry stays in the process table. The child is dead and not using any other resources, but one potential problem is that the process table fills up. Another problem is that the ps command (depending on switches) shows a bunch of "defunct" entries. (Similarly, there may be extra entries in /proc/.)

A GeeksforGeeks zombie article explained how to stop the zombies; I chose the SIG_IGN option, which tells the OS that the parent doesn't care what the exit value is, so the child's process entry is removed. I don't care because, for one, I have other ways of testing whether the system is working. For another, the parent can't wait() in my case because its job is to immediately start listening for more connections. Another option is a signal handler, but there is almost no benefit to the parent knowing the value in my case. Again, I have other ways of testing whether everything is working.
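Since the simptime code isn't shown here, a minimal PHP sketch of the SIG_IGN approach (assuming the pcntl extension; the port is arbitrary):

<?php
// SIG_IGN on SIGCHLD tells the kernel the parent will never wait(), so
// finished children are reaped immediately instead of becoming zombies.
pcntl_signal(SIGCHLD, SIG_IGN);

$server = stream_socket_server('tcp://127.0.0.1:8037', $errno, $errstr);
while ($conn = stream_socket_accept($server, -1)) {
    if (pcntl_fork() === 0) {  // child: answer one client, then exit
        fwrite($conn, date('c') . "\n");
        fclose($conn);
        exit(0);
    }
    fclose($conn);             // parent: close its copy and loop straight back to listening
}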

2021, July 5 - yet another round with a blasted CMS

I have encoded below my software dev rule #4 about being careful of CMSs. I got burned again last night--Happy July 4 to me! I am building an Ubuntu 21.04 environment from scratch as opposed to upgrading. There are several reasons, but I suppose that is another story. Anyhow, I was trying to get Drupal 7 to run in the new environment. Upon a login attempt, I kept getting a 403 error and "Access denied" and "You are not authorized to access this page" even though I was definitely using the right password.

To back up, first I was getting "PHP Fatal error: Uncaught Error: Undefined class constant 'MYSQL_ATTR_USE_BUFFERED_QUERY' in /.../includes/database/mysql/database.inc". Thankfully I remembered that this is Drupal's crappy way of saying "Hey, you don't have php-mysql installed." Note that you have to restart Apache after installing it, too.
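Spelled out (Ubuntu):

sudo apt install php-mysql
sudo systemctl restart apache2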

Similarly, Drupal's crappy way of saying "Hey, you don't have Apache rewrite installed" was a much more tangled path. I foolishly went digging in the code with the NetBeans debugger. This is a case of "When you're not in the relevant parts of Africa, and you see hoof prints, think horses, not zebras." I assumed a problem with Drupal rather than the obvious notion that something wasn't set up right.

I eventually got to code that made it clear that the login was not being processed at all. By looking at the conditions, I eventually realized that Drupal wasn't receiving the login or password. Then I realized that none of $_REQUEST, $_POST, or $_GET were showing the login and password. So I searched on that problem and quickly realized that it was a rewrite / redirect problem.
sudo a2enmod rewrite
sudo systemctl restart apache2

Problem solved! I won't admit after how long.

I was inspired to write some code for the "Never again!" category (a more legitimate use of the phrase than some, I might add).

2021, March 4 - 5 - Robo3T copy

The makers of Robo3T have started asking for name and email when you download. R3T is of course free and open source software (FOSS), as is almost everything I use. I got the latest version directly from them, but I thought I'd provide it for others. Providing it for others is part of the point of FOSS.

Download - robo3t-1.4.3-linux-x86_64-48f7dfd.tar.gz

SHA256(robo3t-1.4.3-linux-x86_64-48f7dfd.tar.gz)= a47e2afceddbab8e59667facff5da249c77459b7e470b8cae0c05d5423172b4d
Robo 3T 1.4.3 - released approximately 2021/02/25	

I'm messing with this entry as of the 5th at 12:08am my time. I first posted it several minutes ago.

2021, Jan 31 - yet more on time measurement and sync

I'll go back a year and try to explain the most recent manifestations of my time-measuring obsession. I wasn't so much interested in keeping my computer's time super-accurate as I was interested in how to compare it with "official" time. Otherwise put, how do I query a time server? The usual way turned out to be somewhat difficult. (It just occurred to me a year later that perhaps NTP servers don't check the incoming / client time info. Or perhaps they do. In any event...) The usual way is first demonstrated in my SNTP (simple NTP) web project (GitHub, live).
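To give a flavor of what "query a time server" means at the lowest level, here is a minimal SNTP client sketch in PHP--not the project code; time.google.com is just an example server:

<?php
$sock = fsockopen('udp://time.google.com', 123, $errno, $errstr, 2);
stream_set_timeout($sock, 2);
fwrite($sock, chr(0x1B) . str_repeat("\0", 47)); // 48 bytes: LI=0, VN=3, mode=3 (client)
$resp = fread($sock, 48);
fclose($sock);
$words = unpack('N12', $resp);    // twelve 32-bit big-endian words
$secs  = $words[11] - 2208988800; // transmit timestamp: NTP 1900 epoch -> Unix 1970 epoch
echo 'server says: ' . date('c', $secs) . "\n";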

During those explorations, I found the chrony implementation of the network time protocol (NTP). This both keeps "super" accurate time, depending on conditions, and it tells you how your machine compares to "official" time. That kept me happy for a while, but then I started wondering about the numbers chrony gives me.

So I updated the web SNTP code and made a command line (CLI / command-line interface) version. (Note that in that case I'm linking to a specific version because that code will likely move soon.) In good conditions, that matches chrony's time estimate well enough. Good conditions are AT&T U-Verse DSL at a mere 14 Mbps download speed accessed through wifi with 60 - 80% signal strength. Both U-Verse and my wifi signal are very, very stable. (I think it's still called DSL, even after ~22+ years. It involves something that looks like a plain old telephone line, although I can't be sure it's the same local wiring as 40 years ago.)

I can use the "chronyc tracking" command to get my time estimate, or use a tabular form of it that I wrote.

Below are my chrony readings as of moments ago (5:40pm my time). I'm removing some less-relevant rows.

/chronyc$ php ch.php
 mago    uso    rdi      rf    sk   rde      f
145.3     +0  50.91    -0.18  13.1   65   -7.794 
 96.3   +719   1.40    40.71  13.1   36   -7.794 
 95.2    -56   0.97    -0.20  10.5   37   -1.487 
 89.3    -63   1.59    -0.05   1.9   36   -5.476 
  1.8    +10   1.06    -0.00   0.3   36   -7.450 

Weeks later... I'm going to let this post die right here, at least for now. I hadn't posted this as of March 3.

2021, Jan 29 - chrony continued

As a follow up to my previous entry, now I've set minpoll / maxpoll to 1 / 2 with my cellular network. THAT gets results. My offset time approaches that of a wired connection, and the same goes for root dispersion and skew.
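In /etc/chrony/chrony.conf terms, that is the following (kwynn.com as the source, per the entries below; minpoll / maxpoll are log2 seconds, so 1 / 2 means polling every 2 - 4 seconds--again, do not do that to public pool servers):

server kwynn.com iburst minpoll 1 maxpoll 2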

2021, Jan 28 - chrony on wired versus wireless

chrony is a Network Time Protocol (NTP) client / server; in other words, it helps computers keep accurate time by communicating time "readings" over the internet.

In the last few weeks I have set chrony to use kwynn.com as its time source. Kwynn.com lives on Amazon Web Services (AWS). AWS has a time service, and my "us-east" AWS region is physically close to the NIST time servers in Maryland. Right now I have a root dispersion and root delay of around 0.3ms, and my root mean square offset from perfect time is 13 microseconds (us or µs). I have 3 - 5 decimal places after that, but I won't bore you any more than I already am. The point being that it's probably just as good or better than using the NIST servers.

I've tested kwynn.com versus using it plus other servers in the Ubuntu NTP pool, and kwynn.com is much, much better. This is one of several stats that I may quantify one day, but I want to get the key point out because I found it interesting and want to record it for myself as much as anything.

Among other features, chrony has the "chronyc tracking" command that gives you an estimate of your clock's accuracy and various statistics around that estimate. Then I check chronyc against a script I wrote that polls other servers and outputs the delay, including an arbitrary number of polls of kwynn.com. Sometimes I'll query kwynn.com 50 times, seeking the fastest turnaround times, which in theory should be the best. I call this my "burst" script.

On AT&T U-Verse (I think that's still "DSL") at what is probably the slowest available speed (14 Mbps / ~1.75 MB/s), chrony is very stable. What chrony says versus "the burst" is very close.

On my T-Mobile (MetroPCS) hotspot, things get more interesting. Sometimes when I cut over from AT&T to wireless, my time gets pretty bad and the chronyc readings are very unstable. This evening it was so bad that I changed my minpoll / maxpoll to 2 / 4. (Depending on my OCD and my mood, I tend to have it on 4 - 5 / 6 - 7.) Note that you should not use such numbers or even close with the NTP pool, and you may or may not get away with it using NIST--please check the fine print.

When I set min / max to 2 / 4, that's when things got interesting. On one hand, the chronyc numbers stabilize to the point that they get close to wired numbers. On the other hand, comparison to "the burst" is not nearly as "convincing" / close as wired. That is, chrony claims accuracy in a range of 100 - 300 us, but it's hard to get a "burst" to show better than 3 - 4 ms. The burst almost never shows time as good as chrony claims, but that's another discussion.

Otherwise put, with a low poll rate on wireless, chronyc claims to be happy and shows good numbers, but agreement with the burst is not nearly as close.

This is mostly meant as food for thought, and perhaps I'll give lots of gory details later. I mainly wanted to record those 2 / 4 numbers, but I thought I'd give some context, too.

2021, Jan 23 - detecting sleep / hibernate / suspend / wakeup in Ubuntu 20.04

In Ubuntu 20.04 (Focal Fossa), executables (including scripts with the x bit set) placed in /lib/systemd/system-sleep/ will run upon sleep / hibernate / suspend and wakeup. This is probably true of other Debian systems. I mention this because for some distros it's /usr/lib/systemd/system-sleep/

One indicator I had is that the directory itself already existed and 2 files already existed in it: hdparm and unattended-upgrades. There are some comments out there that /lib/... is correct for some Debian systems, but I thought this was worth writing to confirm.

example script

/lib/systemd/system-sleep$ sudo cat kw1.sh
#!/bin/bash 
echo $@ >> /tmp/sleeplog
whoami  >> /tmp/sleeplog
date    >> /tmp/sleeplog
	

The bits:

/lib/systemd/system-sleep$ ls -l kw1.sh
-rwxrwx--- 1 root root 158 Jan 23 18:18 kw1.sh
	

output:

$ cat /tmp/sleeplog
pre suspend
root
Sat 23 Jan 2021 01:39:49 AM EST
post suspend
root
Sat 23 Jan 2021 06:08:02 PM EST
	

The very careful reader will note that the script as shown above is less than the 158 bytes that ls reports. I added a version number and a '******' delimiter after the first version. I'm showing just the basics, in other words, and I'm showing the parts that I know work.

2020, Nov 20 - arbitrary files played as "music"

As part of my now-successful quest for randomness from the microphone, I came across non-randomness from a surprising place. I generated the following audio file with these steps:

dd if=~/Downloads/ubuntu-20.04.1-desktop-amd64.iso of=/tmp/rd/raw.wav bs=2M count=1  # first 2 MiB of the ISO, raw bytes
ffmpeg -f u8 -ar 8k -ac 1 -i /tmp/rd/raw.wav -b:a 8k /tmp/rd/ubulong.wav             # treat those bytes as 8 kHz unsigned 8-bit mono audio
ffmpeg -t 1:35 -i /tmp/rd/ubulong.wav /tmp/rd/ubu95s.wav                             # keep the first 95 seconds
chmod 400 /tmp/rd/ubu95s.wav
mv /tmp/rd/ubu95s.wav /tmp/rd/ubuntu-20-04-1-desk-x64-95-seconds.wav

Turn your speakers down! To about 1/4 or 1/3 of full volume. I now present Ubuntu Symphony #1 - opus 20.04.1.1. There is a bit of noise for less than 2 seconds, then about 3 seconds of silence, and then nearly continuous sound.

I posted several versions quickly; the final version was posted at 6:27pm on posting day.

I'm adding some discussion a year later.

2020, Oct 15 - SEO

In the last few weeks I finally took a number of SEO steps for this site. I'd been neglecting that for years. I registered the httpS version of kwynn.com with Google, and I created a new sitemap with a handful of httpS links.
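A minimal sitemap of the kind I mean--one real URL shown; the rest are whatever pages matter:

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://kwynn.com/</loc></url>
  <!-- more <url> entries, all httpS -->
</urlset>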

A few weeks after the above, I got some surprising Google Search Console results. I have 247 impressions over 3 months for my PACER page. I only have 6 clicks, and I suspect that's because the page's Google Search thumbnail / summary / whatever shows an update date of November, 2017, which is incorrect. Soon I am going to attempt to improve that click through rate.

limitations of RAM, speed, etc. 2020, Oct 7 - entry 2 of the day

My only active apprentice just bought an ArduinoBoy in part because he is fascinated by the idea of wrestling with 1980s-era limitations of RAM and such. As I discussed with him, I am not dissuading him from that. However, I wanted to give him something to think about.

Last night I managed to crash several processes and briefly locked up my session because I didn't consider that there are still limitations on relatively modern hardware. It's much harder to do that much (temporary) damage today than it was in 1995 or 2003, but it's still possible.

Generally speaking, I was testing something that involved all cores at once and as many iterations as I could get. I got away with 12 cores times 2M iterations (24M data points total). Then I ran that again without wiping my ramdisk (ramfs), so I was able to test 48M data points. Then when I tried to run 12 X 8M = 96M, my system went wonky.

I have not done a post-mortem or simple calculations to know what specifically went wrong. I probably exceeded the RAM limitation set in php.ini. I may have exceeded system RAM, but I don't think so. What is odd is that my browser crashed, and it was just sitting there innocently. It was not involved in the wayward code. All the CPUs / cores were pegged for a number of seconds, but that shouldn't have that effect.

Maybe he'll want to figure out what went wrong and how to most efficiently accomplish my testing?

On a related point, one thing I learned is that file_put_contents() outputting one line at a time simultaneously from 12 cores does not work well, which makes perfect sense with a few moments of thought. So I saved the data in a variable until the "CPU stuff" was done and then wrote one file per process. (fopen and fwrite were not notably faster in that case.)
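A sketch of that fix, assuming the pcntl extension; dataPoint() and the paths are hypothetical stand-ins:

<?php
$workers = 12;
$iters   = 2000000;
for ($w = 0; $w < $workers; $w++) {
    if (pcntl_fork() === 0) {                          // child
        $buf = '';
        for ($i = 0; $i < $iters; $i++) {
            $buf .= dataPoint($w, $i) . "\n";          // buffer in RAM during the "CPU stuff"
        }
        file_put_contents("/tmp/rd/out.$w.txt", $buf); // one write per process
        exit(0);
    }
}
while (pcntl_wait($status) > 0);                       // parent reaps all children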

So how do I accomplish the testing I want with as many data points as possible, as fast as possible, without crashing my session (or close enough to crashing it)? The question of limitations applies on a modern scale.

Apparently the current version of the code is still set for 96M rows. The October 3 entry of my GitHub guide explains what I was doing to a degree. I'll hopefully update that page again sometime this week, and try to explain it better.

I also observed several weeks ago that forking processes in an infinite loop will very thoroughly crash the (boot) session to the point of having to hold down the start button. Up until very roughly 2003, when I was still using Satan's Operating System, any infinite loop would crash the session. Now a client-side JS infinite loop will simply be shut down by the browser, and similarly contained in other situations. But infinitely forking processes on modern Ubuntu will get you into trouble. I suppose that's an argument for both a VM and imposing quotas. I took the quota route.
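The quota route, roughly--the username and number are illustrative; the line goes in /etc/security/limits.conf, and "ulimit -u" shows the cap after re-login:

# cap this user's processes; a fork bomb then hits the cap instead of taking down the session
kwynn  hard  nproc  2000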

As best I remember, the code in question was around this point (AWS EC2 / CPU metrics process control).

new rules of software dev - numbers 3 and 4 - 2020, Oct 7 entry 1 of the day

The first two rules are at the beginning of this blog.

Kwynn's rule of software dev #3:

Never let anyone--neither the client nor other devs--tell you how to do something. The client almost by definition tells you what he wants done, not how.

This applies mainly for freelancing, or perhaps one should freelance in order to not violate the rule.

I should have formulated this in 2016 or 2017. I finally had one last incident in the summer of 2020 that caused me to formalize it, and now I'm writing it out several weeks later.

To elaborate on the rule, if you know all the steps necessary to do something in a certain way, do it. After it's done your way, no one is likely to argue with you. If you try to do it someone else's way, you are likely to waste a lot of time and money.

An example: beware of the client requesting the quick fix. If your way is certain and the quick fix is uncertain, by the time you do the quick fix, you would have both fixed the problem and had a better code base by doing it your way.

Another statement of the rule is to beware of assuming that others know more than you do. Specifically beware of those who you may think are developers but are actually developer managers or salespeople with delusions of developing. I once knew a developer manager who exemplified the notion "He knows just enough to be dangerous." He led me into danger.

Kwynn's rule of software dev #4:

Custom-written software is often the best long-term solution. Be very careful of content management systems, ERP systems, e-commerce systems, etc.

To quote a comedian from many decades ago, "I went to the general store, but I couldn't buy anything specific." That reminds me of WordPress, Drupal, OpenERP (I doubt Odoo is any better.), etc. There is plenty more to say on this, but it will have to wait.

July 18, 2020

Some words on JavaScript var, let, const. I'll admit to still being fuzzy on some fine points, but I have come up with some rules of thumb that are well battle tested.
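In sketch form--a reconstruction along conventional lines, not a verbatim copy of my list:

const MAX = 10;  // const by default: most bindings never need to change
let total = 0;   // let only where reassignment is genuinely needed
for (let i = 0; i < MAX; i++) {
    total += i;  // i is scoped to the loop, unlike with var
}
// var is function-scoped and hoisted; in new code it's almost never what you want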

June 21, 2019

Over the last several weeks, I ran into 5 - 6 very thorny problems. Let's see if I can count them. About all I'm good for at this moment is writing gripy blog posts, if that.

My June 12 entry refers you to the drag and drop problem and the hard refresh problem. Those are 2 of the problems.

I just wrote an article on network bridging and using MITM (man in the middle) "attacks" / monitoring. Getting both of those to work was a pain. The bridging took forever because the routing table kept getting messed up. The MITM took forever because it took me a lot of searching to find the necessity for the ebtables commands.

After I solved the Firefox problems mentioned on June 12, I ran into another one. The whole point of my "exercise" for calendar months (weeks of billable time) was to rewrite the lawyer ERP timecards such that they loaded many times faster. They were taking 8 seconds to load, and *I* did not write that code.

Load time was instant on my machine. Everything was good until I uploaded the timecard to the Amazon nano-instance. Then the timecards took 30 - 45 seconds to load. The CPU was pegged that whole time. So, I'm thinking, my personal dev machine is relatively fast. The nano instance is, well, nano. So, I figured, "More cowbell!" At a micro-instance, RAM goes from 0.5 GB to 1 GB. That appeared to be enough to keep the swap space usage to near zero. No help. Small--nope: no noticeable change. At medium, CPUs go from 1 to 2. Still no change. I got up to the one that costs ~33 cents an hour--one of the 2xlarge models with 8 CPUs. Still no change. WTF!?!

I had started to consider the next generation of machines with NVMe (PCI SSDs). My dev machine has NVMe, so maybe that's part of the problem. However, iotop didn't show any thrashing. It was purely a CPU problem.

So, upon further thought, it was time to go to the MySQL ("general") query log. The timecard load was so slow that I figured I might see the query hang in real time. Boy, did I ever! I found one query that was solely responsible. It took 0.13s on my machine and 46s on the AWS nano (and on the much more powerful instances). That's 354x.
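For reference, the general query log can be toggled at runtime, no restart needed (MySQL; the file path is illustrative):

SET GLOBAL general_log_file = '/var/log/mysql/general.log';
SET GLOBAL general_log = 'ON';
-- reproduce the slow page, watch the file, then:
SET GLOBAL general_log = 'OFF';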

The good news was that I wrote the query, so I should be able to fix it, and it wasn't embedded hopelessly in 50 layers of Drupal feces. (I did not choose Drupal. I sometimes wish I had either passed on the project or seized power very early in my involvement. My ranting on CMSs will come one day.)

I thought I isolated which join was causing trouble by taking query elements in and out. I tried some indexes. Then I looked at the explain plan. It's been a long time since I've looked at an explain plan, but I didn't see anything wrong.

My immediate solution was to take out the sub-feature that needed the query. That's fine with my client for another week or two. Upon yet more thought, I should be able to solve this easily by using my tables rather than Drupal tables. I've written lots of my own tables to avoid Drupal feces. It turns out that using my tables is a slightly more accurate solution to the problem anyhow.

One of the major benefits of using AWS is that my dev machine and the live instance are very close to identical in terms of OS version, application versions, etc. So this is an interesting example of an exponential effect--change the performance characteristics of the hardware just a bit, and your query might go over the cliff.

I guess it's only 5 problems. It seemed like more.

June 12, 2019 - a week in the life

I created a new page on some of my recent frustrations--frustrations more than achievements. We'll call it "a week in the life." I thought browser differences were so 2000s (2000 - 2009).

March 9, 2018 - upgrading MongoDB in Ubuntu 17.10

This started with the following error in mongodump:

Failed: error dumping metadata: error converting index (<nil>): conversion of BSON value '2' of type 'bson.Decimal128' not supported

Here is my long-winded solution.

March 8, 2018 - anti-Objectivist web applications

I was just sending a message on a not-to-be-named website, and I discovered that it was eliminating the prefix "object" as in "objective" and "objection." It turned those words into "ive" and "ion." Of course, it did it on the server side, silently, such that I only noticed it when I read my already-sent message. The good news is that the system let me change my message even though it's already sent. I changed the words to "tangible" and "concern."

I have been teaching my apprentice about SQL injection and what I call the "Irish test": Does your database accept "O'Reilly" and other Irish names? This is also a very partial indication that you are preventing SQL injection. Coincidentally, I emailed a version of this entry to someone with such an Irish name. So far, sending him email hasn't crashed GMail. They probably use Mongo, though.
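A minimal PHP / PDO version of passing the Irish test--the table and credentials are illustrative:

<?php
$pdo  = new PDO('mysql:host=localhost;dbname=mydb', 'user', 'pass');
$stmt = $pdo->prepare('INSERT INTO clients (last_name) VALUES (?)');
$stmt->execute(["O'Reilly"]); // parameter binding: the quote is data, not SQL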

If you haven't guessed, what's happening in this case is eliminating "object" because it might be some sort of relative to SQL injection. I thought I'd seen evidence that the site is written in PHP, but, now that I look again, I'm not as sure. This is knowable, but I don't care that much. I don't think "object" is a keyword in either PHP or JavaScript. (Yes, I suppose I should know that, too, but what if I chased down every little piece of trivia?!) In any event, someone obviously got a bit overzealous, no matter what the language.
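My guess at the sort of code involved--purely a hypothetical reconstruction, in PHP for illustration:

<?php
// hypothetical over-zealous "injection filter"
$message = str_ireplace('object', '', $message); // "objective" -> "ive", "objection" -> "ion"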

I will once again posit to my apprentice that I don't make this stuff up.

The final word on SQL injection is, of course, this XKCD comic. I must always warn that I am diametrically opposed to some things Munroe has said in his comic. I would hope he goes in the category of a public figure, and thus I can call him an idiot-savant. Then again, he more or less calls himself that about every 3rd comic. He's obviously a genius in many ways, but he epically misses some stuff. One day, this tech blog might go way beyond tech, but I'm just not quite there yet, so I'm not going to start exhaustively fussing at Randall.

Mar 1, 2018 - LetsEncrypt / certbot renewal

This is the command for renewing an SSL cert "early":

sudo certbot renew --renew-by-default

Without the --renew-by-default flag, I can't seem to quickly figure out what it considers "due for renewal." Without the flag, you'll get this:

The following certs are not due for renewal yet:
  /etc/letsencrypt/live/[domain name]/fullchain.pem (skipped)
No renewals were attempted.

I should have the rate limits / usage quotas under "rate limits."

An update, moments after I posted this: the 3 week renewal emails are for the "staging" / practice / sandbox certs, not the live / real ones. I wonder when or if I'd get the live email? Also, I won't create staging certs again, so those won't help remind me of the live renewals again. I'll put it on my calendar--I'm not relying on an email--but still somewhat odd.

The email goes to your address in your /etc/letsencrypt/.../regr.json file, NOT the Apache config. I say ... because the path varies so much. grep -iR [addr] will find it.

Feb 2, 2018 - base62

Random base64 characters for passwords and such annoy me because + and / will often break a "word"--it's hard to copy and paste the string, depending on the context. Thus, I present base62: the base64 characters minus + and /. I considered commentary, but perhaps I'll leave that as the infamous "exercise to the reader." However, I do have a smidgen of commentary below.

Note, as of 2022/01/05, I am replacing the less-than sign of the php tag with an HTML less-than entity, because the real PHP tag disrupts the NetBeans editor. The current version of this code is now in GitHub.

Example

Assuming you call the file base62.php, give it exe permission, and execute from the Linux command prompt:

./base62.php 50
vjQBjFxJGcotOpxVJyvG1CUQ11010xigP1RyuKza120JWeFkeI
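Since the real code now lives on GitHub, here is a minimal re-sketch of the idea--not the original, and deliberately less obtuse:

#!/usr/bin/php
<?php
// base62 = the base64 alphabet minus '+' and '/': A-Z, a-z, 0-9
$chars = array_merge(range('A', 'Z'), range('a', 'z'), range('0', '9'));
$len   = isset($argv[1]) ? (int)$argv[1] : 20;
$out   = '';
for ($i = 0; $i < $len; $i++) {
    $out .= $chars[random_int(0, 61)]; // cryptographically secure in PHP 7+
}
echo $out . "\n";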

Validation

./base62.php 1000 | grep -P [ANZanz059]

That's my validation that I see the start, end, and midpoints of my 3 sets (arrays) of characters.

UQID

In the event that Google doesn't look inside the textarea, UQID: VMbAlZQ13ojI. That was generated with my brand new scriptlet. So far that string is not indexed by Google. UQID as in unique ID. Or temporarily globally unique ID. Or currently Google unique ID (GOUID?). Presumably it isn't big enough to be unique forever. 62^12 = 3 X 10^21. That's big but not astronomical. :)

somewhat-to-irrelevant commentary

What can I say? Sometimes I amuse myself. Ok. My structure is on the obtuse side. I couldn't help it. I usually don't write stuff like that. Perhaps Mr. 4.6 or one of my more recent contacts can write the clearer version. I actually did write clearer versions, but, then, I couldn't help myself.

further exercise to the reader

Perhaps someone will turn this into a web app? Complete with nice input tags and HTML5 increase and decrease integer arrows and an option to force SSL / TLS and AJAX.

installing

sudo cp base62.php /usr/bin
cd /usr/bin
sudo ln -s ./base62.php base62
cd /tmp
base62
[output =] RyH3HjGnEalr71meSJfm

Now it's part of my system. I changed to /tmp to make sure that '.' in PATH wasn't what made it work--that it was really installed.

Reference

Jan 28, 2018 - Stratego / Probe

I'd like to recommend Imersatz' Stratego board game implementation called Probe. It is the 3-time AI Stratego champion. The AI plays against you. It's a free download; see the "Download" link on that page. From the point of view of a human who is good at the game, I would call it quasi-intelligent, but it beats me maybe 1 in 7 times, so it's entertaining.

I am running the game through WINE ("Wine Is Not an Emulator"), the Windows compatibility layer for Linux. I just downloaded it to make sure it matches what I downloaded to this new-to-me computer months ago. It does. Below I give various specs. Those are to make sure you have the same thing I do. It hasn't eaten my computer or done anything bad. I have no reason to think it's anything but what it says it is. In other words, I am recommending it as non-malware and fun. If it makes you feel any better, you can see this page in secure HTTP.

Probe2300.exe [the download file]
19007955 bytes
or 19,007,955 bytes / ca. 19MB
SHA512(Probe2300.exe)= e96f5ee67653eee1677eb392c49d2f295806860ff871f00fb3b0989894e30474119d462c25b3ac310458cec6f0c551304dd2aa2428d89f314b1b19a2a4fecf82
SHA256(Probe2300.exe)= ee632bcd2fcfc2c2d3a4f568d06499f5903d9cc03ef511f3755c6b5f8454c709

The above is the download file from Imersatz. In the probe exe directory, I get:

1860608 [bytes] Feb 28  2013 Probe.exe
 800611         Feb 28  2013 Probe.chm
1291264         Feb 28  2013 ProbeAI.dll

SHA256(ProbeAI.dll)= 13e862846c4f905d3d90bb07b17b63c915224f5a8c1284ce5534bffcf979537a
SHA256(Probe.chm)= 3b7be4e7933eee5d740e748a63ea0b0216e42c74a454337affc4128a4461ea6b
SHA256(Probe.exe)= 656f31d546406760cb466fcb3760957367e234e2e98e76c30482a2bbb72b0232

Jan 14, 2018 - grudgingly dealing with Mac (wifi installation)

The first time Mr. 4.6 installed Ubuntu Linux (17.10 - Artful Aardvark) on his Mac laptop (MacBook Pro?), wifi worked fine "out of the box." I think that's because he was installing Linux via wifi. This time, he used ethernet, and wifi wasn't recognized--no icon, no sign of a driver. Because he was using ethernet, maybe the installer didn't look for wifi? Maybe he didn't "install 3rd party tools"? (I asked him about that, but he was busy being excited that we fixed it. I'll try to remember to ask again.) There were good suggestions on how to fix it out there, but I derived the simplest one:

sudo apt-get install bcmwl-kernel-source

He didn't even have to reboot. His wifi icon just appeared.

For the record, that's "Broadcom 802.11 [wifi] Linux STA wireless driver source."

Thanks to Christopher Berner who got me very close. He was suggesting a series of Debian packages, but the above command installed everything in one swoop.

There are a few questions I have for 4.6 about this. Hopefully I'll get answers tomorrow or later.

Jan 3, 2018

JavaScript drag and drop

I created a JavaScript drag and drop example. I may have done it in JQuery a handful of times, but I don't remember for sure. This is a "raw" JS version--no JQuery or other libraries. I've been thinking about writing a to do list organizer which would use drag and drop. Also, I might use it professionally soon.

new-to-HTML5 semantic elements / tags

Last night, my apprentice Mr. 4.6 showed me these new HTML5 elements / tags. I remember years ago looking for a list of everything that is new in HTML5. I suspect I've at least heard of 75% of it from searching on various stuff, but I did not know about some of those tags. I would hope there is a good list by now. Maybe I'll look again or 4.6 will find one.

Dec 24, 2017 - remote MongoDB connections through Robo 3T / ssh port forwarding

A new trick to my Linux book:

ssh -L 27019:127.0.0.1:27017 ubuntu@kwynn.com -i ./*.pem

That forwards local port 27019 to kwynn.com's 27017 (MongoDB), but from kwynn.com's perspective 27017 is a local port (127.0.0.1 / localhost). Thus, I can connect through Robo 3T ("the hard way" / see below) to MongoDB on kwynn.com without opening up 27017 to the world. In Robo 3T I just treat it like a local connection, except on port 27019. (There is nothing special about 27019. Make it what you want. Thanks to Gökhan Şimşek who gave me this idea / solution / technique in this comment.)

I used this because I am suffering from a variant of the ssh tunneling bug in 3T 1.1. (I solved it. See below.) I think I have a different problem than most report, though. Most people seem to have a problem with encryption. I'm not having that problem because this is what tail -f /var/log/auth.log shows:


I suspect the Deprecated stuff is irrelevant:

Dec 24 00:11:11 kwynn.com sshd[18675]: rexec line 16: Deprecated option UsePrivilegeSeparation
Dec 24 00:11:11 kwynn.com sshd[18675]: rexec line 19: Deprecated option KeyRegenerationInterval
Dec 24 00:11:11 kwynn.com sshd[18675]: rexec line 20: Deprecated option ServerKeyBits
Dec 24 00:11:11 kwynn.com sshd[18675]: rexec line 31: Deprecated option RSAAuthentication
Dec 24 00:11:11 kwynn.com sshd[18675]: rexec line 38: Deprecated option RhostsRSAAuthentication
Dec 24 00:11:12 kwynn.com sshd[18675]: reprocess config line 31: Deprecated option RSAAuthentication
Dec 24 00:11:12 kwynn.com sshd[18675]: reprocess config line 38: Deprecated option RhostsRSAAuthentication
[end deprecated]

Dec 24 00:11:12 kwynn.com sshd[18675]: Accepted publickey for ubuntu from [my local IP address] port 50448 ssh2: RSA SHA256:[30-40 base64 characters]
Dec 24 00:11:12 kwynn.com sshd[18675]: pam_unix(sshd:session): session opened for user ubuntu by (uid=0)
Dec 24 00:11:12 kwynn.com systemd-logind[960]: New session 284 of user ubuntu.
Dec 24 00:11:12 kwynn.com sshd[18729]: error: connect_to kwynn.com port 27017: failed.
Dec 24 00:11:12 kwynn.com sshd[18729]: Received disconnect from [my local IP address] port 50448:11: Client disconnecting normally
Dec 24 00:11:12 kwynn.com sshd[18729]: Disconnected from user ubuntu [my local IP address] port 50448
Dec 24 00:11:12 kwynn.com sshd[18675]: pam_unix(sshd:session): session closed for user ubuntu
Dec 24 00:11:12 kwynn.com systemd-logind[960]: Removed session 284.

For the record, the error I get is "Cannot establish SSH tunnel (kwynn.com:22). / Error: Resource temporarily unavailable. Failed to create SSH channel. (Error #11)."

This doesn't seem to be an encryption problem, though, because my request is clearly accepted. MongoDB is bound to 127.0.0.1--internal connections only--but this shouldn't be a problem because, based on traceroute, my system knows that IT is kwynn.com (it "knows" this via /etc/hosts). It doesn't try routing packets outside the machine.

On the other hand, this won't work in the sense that 3T won't connect:

ssh -L 27019:kwynn.com:27017 ubuntu@kwynn.com -i ./*.pem

Solution

Huh. I just fixed my problem. If I put kwynn.com in /etc/hosts as 127.0.1.1 then 3T won't work through "manual" ssh forwarding (like my command above), even if I forward as 127.0.1.1. If I put kwynn.com in /etc/hosts as 127.0.0.1, 3T works 3 ways: either through the above (127.0.0.1) OR this:

ssh -L 27019:kwynn.com:27017 ubuntu@kwynn.com -i ./*.pem

AND 3T works without my "manual" command-line ssh port forwarding, through its own ssh tunnel feature, which solves my original problem. However, I'm glad I learned about ssh port forwarding.
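In /etc/hosts terms, the version that makes all 3 ways work:

127.0.0.1 localhost
127.0.0.1 kwynn.com
# 127.0.1.1 kwynn.com   <-- the version that breaks 3T's tunneling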

I need to figure out what the difference is between 127.0.1.1 and 127.0.0.1. AWS puts the original "name" of the computer in /etc/hosts as 127.0.1.1 by default, and I just read instructions to use 127.0.1.1. Oh well, for another time...

December 21, 2017 - kwynn.com has its first SSL cert, Mongo continued

I'm starting to write around 11:08pm. I'll probably post this to test the link just below, then I should write more.

SSL

Kwynn.com has its first SSL certificate. You can now read this entry or anything else on my site through TLS / SSL. I have not forced SSL, though: there's no automatic redirect or rewrite.

I remember years ago (2007 - 2009??), a group was trying to create a free-as-in-speech-and-beer certificate authority (CA). Now it's done, I've used it, and it's pretty dang cool. Here are some quick tips:

my ssl.conf

Rather than letting certbot mess with your .conf, your .conf should look something like the following. Once the 3 /etc/letsencrypt files have populated with certbot ... certonly, then you're safe to restart Apache.

I included ErrorLog and CustomLog commands to make sure SSL traffic went to the same place as non-SSL traffic.

<VirtualHost *:443>

	ServerName kwynn.com
	ServerAdmin myemail@example.com

	DocumentRoot /blah
	<Directory /blah>
		Require ssl
	</Directory>

ErrorLog ${APACHE_LOG_DIR}/error.log
CustomLog ${APACHE_LOG_DIR}/access.log combined

SSLEngine  on
Include /etc/letsencrypt/options-ssl-apache.conf
SSLCertificateFile /etc/letsencrypt/live/kwynn.com/fullchain.pem
SSLCertificateKeyFile /etc/letsencrypt/live/kwynn.com/privkey.pem
</VirtualHost>

That does NOT force a user to use SSL. "Require" only applies to 443, not 80. If you want to selectively force SSL in PHP (before using cookies, for example), do something like this:

    if (empty($_SERVER['HTTPS']) || $_SERVER['HTTPS'] !== 'on') {
        header('Location: https://' . $_SERVER['HTTP_HOST'] . $_SERVER['REQUEST_URI']);
        exit(0);
    }

Using empty() for the first term covers the case where 'HTTPS' isn't set at all, without putting an undefined-index warning in the Apache error log; a bare !$_SERVER['HTTPS'] triggers that warning when the key is missing.

MongoDB continued -- partial SSL

I started to secure MongoDB with SSL / TLS, but then I noticed the Robo 3T option to use an SSH tunnel. Since one accesses AWS EC2 through an ssh tunnel anyhow, and I want access only for me, there is no need to open MongoDB to the internet. I'd already learned a few things, though, so I'll share them. Note that this is not fully secured because I had not used Let's Encrypt or any other CA yet, and I'm skipping other checks as you'll see. I was just trying to get the minimum to work before I realized I didn't need to continue down this path. See Configure mongod and mongos for TLS/SSL.

cd /etc/ssl/
openssl req -newkey rsa:8096 -new -x509 -days 365 -nodes -out mongodb-cert.crt -keyout mongodb-cert.key
cat mongodb-cert.key mongodb-cert.crt > mongodb.pem


Then set up the config file as such:

cat /etc/mongodb.conf

storage:
  dbPath: /var/lib/mongodb
  journal:
    enabled: true

systemLog:
  logAppend: true

net:
  bindIp: 127.0.0.1
  port:   27017
  ssl:
    mode: requireSSL
    PEMKeyFile: /etc/ssl/mongodb.pem

******
Then the NOT-fully-secure PHP part:

<?php
set_include_path('/opt/composer');
require_once('vendor/autoload.php');

$ctx = stream_context_create(array(
	"ssl" => array(
	    "allow_self_signed" => true,
	    "verify_peer"       => false,
	    "verify_peer_name"  => false,
	    "verify_expiry"     => false
	)
    )
);

$client = new MongoDB\Client("mongodb://localhost:27017", 
				array("ssl" => true), 
				array("context" => $ctx)
		);

$dat = new stdClass();
$dat->version = '2017/12/21 11:01pm EST (GMT -5) America/New_York or Atlanta';
$tab = $client->mytest->stuff;
$tab->insertOne($dat);

Dec 18 - MongoDB (with PHP, etc.)

I started using relational (SQL) databases in 1997. Finally in the last few years, though, I've seen a glimmer of the appeal of OO / schema-less / noSQL / whatever databases such as MongoDB. For the last few months I've been experimenting with Mongo for my personal projects. I'm mostly liking what I'm seeing. I haven't quite "bitten" or become sold, but that's probably coming. I see the appeal of simply inserting an object. On the other hand, I've done at least one query so far that would have been far easier in SQL. (Yes, I know there are SQL-to-Mongo converters, but the one I tried wasn't up to snuff. Perhaps I'll keep looking.)

I've been using Robo 3T (v1.1.1, formerly RoboMongo) as the equivalent of MySQL Workbench. I've liked it a lot. In vaguely related news, I found it interesting that some of the better Mongo-PHP examples I found were on Mongo's site and not PHP's. The PHP site seems rather confused about versions. I'm using the composer PHP-Mongo library. Specifically, the results of "$ composer show -a mongodb/mongodb" are somewhat perplexing, but they include "versions : dev-master, 1.3.x-dev, v1.2.x-dev, 1.2.0 ..." At the MongoDB command line, db.version() == 3.4.7. I don't think Mongo 3.6 comes with Ubuntu 17.10, so I'm not jumping up and down to install "the hard way," although I've installed MDB "the hard way" before.

Mostly I'm writing this because I've been keeping that PHP link in my bookmarks bar for weeks. If I publish it, then I don't need the link there in valuable real estate. Although in a related case I forgot for about 10 minutes that I put my Drupal database timeout fix on my web site. Hopefully I'll remember this next time.

Dec 17, 2017

Today's entry 2 - yet another Google Apps Script / Google Calendar API error and possible Google bug

I solved this before I started the blog and wrote about the other errors below. The error was "TypeError: Cannot find function createAllDayEvent in object Calendar." This was happening when I called "CalendarApp.getCalendarById(SCRIPT_OWNER);" twice within a few lines (milliseconds or less) of each other. The failure rate was something like 10 - 15% until I created the global. The solution is something like this:

var calendarObject_GLOBAL = false;

function createCalendarEntry(summary, dateObject) {
	var event = false;
	event = calendarObject_GLOBAL.createAllDayEvent(summary, dateObject);	
}

calendarObject_GLOBAL = CalendarApp.getCalendarById(SCRIPT_OWNER); // calendar object

createCalendarEntry('meet Bob at Planet Smoothie', dateObject123);

I'm not promising that runs; it's to give you the idea. Heaven forbid I post proprietary code, and there is also the issue of taking the time to simplify the code enough to show my point. I should have apprentices for that (hint, hint).

I was getting errors when I called CalendarApp... both inside and outside the function. I suspect there is a race condition bug in Google's code. We know the hard way how fanatical they are about asynchronicity. Sometimes that's a problem.

Yes, yes. I'm being sarcastic, and I may be wrong in my speculation. I understand the benefit of all async. But isn't part of the purpose of a blog to complain?

Today's entry 1

I just updated my Drupal database connection error article.

Dec 6, 2017 - today's entry 2 - fun with cups and Drupal runaway error logs

I just discovered that /var/log/cups was using 40GB. Weeks ago I noticed cups was taking 100% of my CPU (or one core, at least) and writing a LOT of I/O. It was difficult to remove it entirely. The solution was something to the effect of removing not only the "cups" package but the cups-daemon. cups is a Linux printing process. I haven't owned a working printer in about 6 years, and I finally threw the non-working one away within the last year.

I've had the same runaway log problem with Drupal writing 1000s of warnings (let alone errors) to "watchdog." It took me a long time to figure out that's why some of my Drupal processes were so slow. It seems that Drupal should simply stop logging errors after a certain number of iterations rather than trash the disk for minutes. If I cared about Drupal, perhaps I would lobby for this, but I have come somewhere close to despising Drupal. That's another story for another time.

Dec 6, 2017 - fun with systemd private tmp directories

This happens when you just want to use /tmp from Apache, but no, you get something like /tmp/systemd-private-99a5...-systemd-resolved.service-Qz... owned by root and with no non-root permission. (Yes, yes, I have root access. That's not the point.) Worse yet, there are a bunch of such systemd directories, so which one are you looking for? Yes, yes, I'm sure there is a way to know that. Also not the point. The point is: please just make it stop!

Solution (for Ubuntu 17.10 Artful Aardvark)

  1. with root permission, open for editing: /etc/systemd/system/multi-user.target.wants/apache2.service
  2. Modify this line from true to false: PrivateTmp=false
  3. run this: sudo systemctl restart apache2.service
  4. I restarted Apache itself afterward, but I didn't try skipping that, so I don't know whether it's strictly necessary (see note below).

Notes

I don't even know if restarting the apache2.service is the same thing as restarting Apache or not. On this point, it is worth noting that sometimes you have to stop going down the rabbit hole, or you may never accomplish what you set out to do. Yes, I should figure out what this systemd stuff is. Yes, I should know if the apache2.service is separate from Apache. One day. Not when I'm trying to get something very simple accomplished, though. Also, yes, I understand the purpose of a root-only private directory under /tmp. Yes, I understand that /tmp is open to all. But none of that is the point of this entry.

If you can't tell, I'm a bit irritated. Sometimes dev is irritating.

For the purpose of giving evidence to my night owl cred, I'm about to post at 2:24am "my time" / EST / US Eastern Standard Time / New York time / GMT -5 / UTC -5.

2017, Nov 14 (entry 5)

I did launch with entry 4.

I just took an AWS EC2 / EBS snapshot of an 8GB SSD ("gp2") volume from my Kwynn.com "nano" instance at US-east-1a. With my site running, it took around 8 minutes. The "Progress" showed 0% for 6 - 7 minutes, then briefly showed 74%, then showed "available (100%)." It ran from 2:55:34AM - around 3:03am. My JS ping showed no disruption during this time. CPU showed 0%. I didn't try iotop. (Processing almost certainly takes place almost if not entirely outside of my VM, so 0% CPU makes sense.)

This time seems to vary over the years and perhaps over the course of a day, so I thought I'd provide a data point.

Entry 4 and launch attempt 2

I wrote entries 1 - 3 at the end of October, 2017, but I have not posted this yet. I'm writing this on Friday, November 10 at 7:34pm EST (Atlanta / New York / GMT -5). I mention the time to emphasize my odd hours. See my night owl developer ad.

I'm writing right now because of my night owl company (or less formal association) concept. My potential apprentice whom I codenamed "Mr. 4.6 Hours" has been active the last few days. I'd like to think I'm getting better at the balance between lecturing, showing examples, and leaving him alone and letting him have at it. I think he's making progress, but he's definitely making *me* think and keeping me active. Details are a longer story for another time. Maybe I'll post some of my sample code and, eventually, his code.

He's not around tonight, and I miss the activity. As I said in the ad, I'd like to get to the point that I always have a "green dot" on Google Chat / Hangouts or whatever system we wind up agreeing on.

Based on the last few days, I have a better idea of how to word my ad and the exchange I want with apprentices. Perhaps I'll write that out soon.

dev rules 1 and 2

Rules 1 and 2 are in entries 1 and 3, respectively, below.

Rules 3 and 4 are way "above" / later.

Entry 3: dev rule #2

My first GAS problem, and perhaps the 2nd if it is indeed a server problem, bring up my rule #2:

Kwynn's software dev rule #2: always host applications on a site where you have root access and otherwise a virtual machine--something you have near-total control over. It should be hard to distinguish your control of the computer sitting next to you versus your host.

Amazon Web Services (AWS) meets my definition. AWS is perhaps one of the greatest "products" I've ever come across. It does its job splendidly. When they put the word "elastic" (meaning "flexible") in many of their products, they mean it.

Others come close. I used Linode a little bit; it's decent. I have reason to believe Rackspace comes close. I am pretty sure that neither of them, though, allows you to lease (32-bit) IP addresses like AWS does. I am reasonably sure getting a 2nd IP address with Linode or Rackspace is a chore--meaning ~$30 and / or human intervention and / or a delay is involved. With Amazon, a 2nd IP address takes moments and is free as long as you attach it to an (EC2) instance.

This rule is less absolute than #1. Violating it always leads to frustration and wasted time, though. Whether the wasted time is made up for by the alleged benefits of non-root hosts is a question, but I tend to think not. I've been frustrated to the point of ill health--one of the very few times I've *ever* been sick. That's a story for another time, though.

If it's not clear, using GAS violates the rule because of the situation where there is nothing you can do. I had some who-knows-the-cause problems with AWS in late 2010, but I've never had a problem since. If, heaven forbid, I did have a problem, I could rebuild my site in another Amazon "availability zone" pretty quickly. As opposed to just being out of luck with GAS.

Why I violate the rule with GAS is another story, perhaps for another time. I'll just say that if it were just me, I'd probably avoid GAS. With that said, some time I should more specifically praise some features of GAS as it applies to creating a Google Doc. I was impressed because given the business logic limitations I was working with, GAS was likely easier than other methods.

Entry 2: Google Apps Script and StackOverflow.com

I've been considering a blog for months if not years. I finally started because of this problem I'm about to write about.

This blog entry deals with both the specific problem and a more general problem.

The specific problem was, in Google Apps Script (GAS), "Server error occurred. Please try saving the project again". The exact context doesn't really matter because if you come across the problem, you know the context.

I spent about an hour chasing my tail around trying variations and otherwise debugging. At some point I tried to find info on Google itself. Google referred "us" to StackOverflow.com (SO) with the [google-apps-script] label. Google declares that to be the official trouble forum. As it turned out, someone else was having the same problem. I joined SO in order to respond. Then roughly 4 others joined in. We were all having the same problem, and nothing we tried fixed it. I am 99% sure it was a Google server problem and there was nothing we could do. The problem continued during that night. Then I was inactive for ~14 hours. By then, everything worked.

The more general problem I wanted to address is the way SO's algorithms handled this. The original post and my response are still there several weeks later. However, others' perfectly valid responses were removed. To this day, SO still says, "Because [this question] has attracted low-quality or spam answers that had to be removed, posting an answer now requires 10 reputation on this site..."

This sort of algorithmic failure troubles me. I'd like the memory of those deleted posts on the record.

I was motivated to write about this because I encountered another GAS error a few hours ago that I once again suspect is a server error. This time, I was the one who started the thread. 2 hours later, no one has answered. I'm curious how this turns out. I'm not linking to the thread because it's still possible I caused the problem. Also, I'm not linking to it because Google almost immediately indexed it, so SO is the appropriate place to go.

Entry 1: dev rule #1

Kwynn's Software Dev Rule #1: Never develop without a debugger. You will come to regret it. To clarify terms, by "debugger," I mean a GUI-based tool to set code breakpoints, watch variables, etc. Google Chrome Developer Tools "Sources" tab is a debugger for client-side JavaScript. Netbeans with Xdebug is a debugger for PHP. Netbeans will also work with Node.js and Python.
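For the PHP side, the php.ini wiring looks roughly like this (Xdebug 2.x-era settings; Xdebug 3 later renamed them, e.g. xdebug.mode=debug):

zend_extension=xdebug.so
xdebug.remote_enable=1
xdebug.remote_port=9000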

It is tempting to violate this rule because you think "Oh, I'll figure it out in another few minutes."

Another statement of this rule is "If you're 'debugging' with console.log or print or echo, you're in big trouble."

page history

HTML5 valid