Complex (2)

I have generally found NoSQL to be a disaster. Like agile processes, it allows you to dispense with certain disciplines, but for use at scale and over time it requires you to engage in substitute disciplines. Too often these are not practiced. From a recent work chat with minor adaptation:

Data hygiene is crucial. I wouldn’t be opposed to broader NoSQL/JSON use if we used JSON schemas wherever appropriate, but at that point it is probably simpler to flatten the data into tables.

A good schema is a species of defensive code; e.g., you can have higher confidence that the value you are reaching for is actually there no matter how old the document.
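To make that concrete, here is a minimal sketch of schema-as-defensive-code. It assumes Python and the jsonschema package, and the user document and its field names are made up purely for illustration:

import jsonschema

# Hypothetical schema for a "user" document; the field names are
# illustrative, not anyone's real data model.
USER_SCHEMA = {
    "type": "object",
    "required": ["id", "email", "created_at"],
    "properties": {
        "id": {"type": "integer"},
        "email": {"type": "string"},
        "created_at": {"type": "string"},
    },
}

def load_user(document):
    # Validate before use: if an old document is missing "created_at",
    # we fail loudly here instead of somewhere far from the cause.
    jsonschema.validate(instance=document, schema=USER_SCHEMA)
    return document

The same check that guards new writes also documents exactly what any old document is allowed to look like.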

See also: Complex

Complex

I’ve joked for a while that EDR systems such as CrowdStrike or Carbon Black would become Skynet. Or, more likely, a tool of international espionage and cyber-warfare. It doesn’t feel good to be vindicated. (Now imagine someone accomplishing this with a major browser or password manager application.) Complex and highly interconnected systems are difficult to make stable, resilient, or secure, and cannot possibly be made antifragile. (This is a lesser reason why I miss my old pickup truck.) I’m not very excited about Kubernetes for the same reason. It’s also partly why I’m not very excited about artificial intelligence, and additionally because analysis of data, whether by machine or by human, does not automatically involve either wisdom or decisiveness (see also Edwin Friedman and Nassim Taleb).

Crossposted on I gotta have my orange juice.

Occam’s razor

On my nontechnical blog I reflect on simplicity:

I’ve been appealing to Occam’s razor lately as a rule for evaluating architectural decisions and their tradeoffs. In particular, architectural decisions must take into account not only ideal considerations, but also a team’s capacity to develop, maintain, and support these decisions. Simplicity has its own rewards regardless of the size of your team, but the smaller the team, the more aggressively you must press for that simplicity. Don’t multiply entities unnecessarily!

Rest

We considered how stress and self-discipline result in growth and strength, whether that is physical, mental, emotional, or spiritual. However, an important corollary of this is that intervals of rest are needed so that we are able to recover stronger instead of ending up progressively worn down.

From nature and our own experience we can see that this rest needs to happen on several cycles. There is a daily rest (1/3 of our time is spent sleeping), a wise principle of weekly rest (one day out of seven), and a yearly rest (winter, vacations). We could even consider the wisdom of longer cycles of rest (e.g., taking a sabbatical every 7 to 11 years, as many universities practice for their faculty and as Intel has done).

These principles apply not only to organic life but also to organizations. While agile principles and techniques do increase team efficiency and productivity, it is a mistake to think that agile’s goal is continuous apparent productivity. A number of deliberate breaks in continuous apparent productivity are necessary to healthy agile product development. It is important to brainstorm, learn, conduct retrospectives, take time to refactor, experiment and evaluate alternatives . . . and also to rest. Paradoxically, all of these ways of taking time to slow down often help to improve your team’s long-term productivity.

Obviously our individual daily, weekly, and annual cycles of rest help with the health of our agile team. But the team itself should also be engaging in rest. There are many possibilities here, including team outings and shared meals, team training, and planning for gap sprints or gap weeks to focus on lighthearted or experimental work (what if I rewrote this in Clojure, Haskell, or Racket?). In keeping with the spirit of agile, the team should evaluate its own need for rest and plan appropriate kinds of rest.

Crossposted to I gotta have my orange juice.

Travis and Pylint

For a while my team has had Travis set up to run Pylint (as well as several other linters) against our code base. However, because we didn’t start this practice from the beginning, the number of warnings was a bit daunting. We told ourselves that we would fix this over time, and set our script to always return 0 so that Travis would be happy.

Then I read: Why Pylint is both useful and unusable, and how you can actually use it. I was inspired by this to try my hand at reducing Pylint’s scope. However, I took a different approach. Instead of disabling all checks and enabling them incrementally, I adjusted our script to check only for fatal and error findings in Pylint. Pylint encodes in its exit status which levels of messages were issued: the status is a bit mask in which 1 means a fatal message was emitted, 2 an error, 4 a warning, 8 a refactor message, 16 a convention message, and 32 a usage error.

Here is my approach:

# Run Pylint against the code base ("your_package" is a placeholder).
pylint your_package
rc=$?

# Pylint's exit status is a bit mask: 1 = fatal, 2 = error, 4 = warning,
# 8 = refactor, 16 = convention, 32 = usage error.
# Fail the Travis build only if the fatal (1) or error (2) bits are set.
if [ $(($rc & 3)) -ne 0 ]; then
    echo "Pylint failed"
    exit 1
else
    echo "Pylint passed"
    exit 0
fi

The number of errors found by Pylint was much more manageable than the full set of messages it produced. We were able to correct these problems easily, and move to addressing warnings and other messages incrementally over time.

Ansible Playbook for Wekan

My team is experimenting with using open-source tools deployed internally for Kanban cards.

One tool we are exploring is Wekan, formerly known as Libreboard.

I deployed a Wekan instance to a RHEL 7 virtual machine for our testing. For this deployment, I wrote a simple Ansible playbook with a few additional configuration files (nginx config, Node PM2 configuration), in case there is ever a need to re-deploy the instance.

You can find my playbook and associated files on GitHub: wekan-setup. The files are as follows:

  • purekan.yml—Ansible playbook
  • wekan.yml—Node PM2 configuration
  • wekan.conf—nginx proxy configuration

You’ll need to customize things slightly for your own domain name, or if you are using a distribution other than RHEL.

Slow down to speed up

The next maneuver that Mott proposed for his pupils was most complicated: ‘Randy, you’re in Gemini down here and you must intercept Agena-B up here. Don’t go for the target. Burn straight ahead. Go faster to go slower.’

‘Doc, I understand that part. But what in hell do I do next?’

‘As you move to a higher orbit, your speed will begin to drop off, and believe it or not, if you obey the figures your computer will be feeding you, you’ll bring your Gemini right in behind the target. The computer will time it so that when you’re about a hundred feet apart, your speeds will be identical, and so will your orbit. Then your twelve-year-old son could dock the two vehicles, because you can make the Gemini move a quarter of a mile an hour faster than the Agena, ease it into a docking.’

Claggett and Pope looked at each other, and the former said, with his bright Texan grin, ‘Like the pretty girl said in My Fair Lady, “I think I got it.”’

‘I believe the professor said that,’ Mott said. ‘And to protect you,’ he added, ‘we always time the docking so it occurs in the daylight part of the orbit.’

‘Real thoughtful,’ Claggett said.

‘So think about the orbits tonight. Memorize the diagram. And keep telling yourself, “To go faster, I must go slower.”’ (James Michener, Space, 502)

You can envision this somewhat counter-intuitive aspect of orbital mechanics in a couple of different ways. First, you can think of the climb to a higher orbit as trading kinetic energy (speed) for potential energy (altitude). You can also picture it in terms of Kepler’s second law, which states that an orbit sweeps out equal areas in equal times: the farther out you are, the more area a given amount of motion sweeps, so your speed must drop to compensate.
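If you want to check the numbers, here is a back-of-the-envelope sketch (in Python, with rough constants for Earth and purely illustrative altitudes) using the circular-orbit relation v = sqrt(GM/r):

import math

GM_EARTH = 3.986e14   # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6     # mean Earth radius, m

# Circular-orbit speed v = sqrt(GM / r): the higher the orbit, the slower you go.
for altitude_km in (300, 1000, 35786):
    r = R_EARTH + altitude_km * 1000
    v = math.sqrt(GM_EARTH / r)
    print(f"{altitude_km:>6} km altitude -> {v / 1000:.2f} km/s")

At about 300 km you are moving close to 7.7 km/s; out at geostationary altitude you have slowed to roughly 3.1 km/s, which is the effect Mott is describing.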

This notion of speeding up to slow down (and conversely slowing down to speed up) is useful for considering how organizations and projects scale. When your project (program, product, etc.) is small and your customer base is small, you can and do move very quickly. If you are a programmer, it is possible to rewrite your entire codebase in a few weeks. At this stage, organizational disciplines can feel like unnecessary drag on this perceived speed. But if you have fallen into the trap of thinking that way, as your project and customer base grow, the lack of discipline actually becomes a drag on your ability to make further change. Over time, with a continued lack of discipline, both the chance of error and the so-called blast radius of that error become larger. You are slowed down both by your fear of error, and by the cost of having to correct such errors. Being thus slowed down, how can you speed back up?

Certainly this sort of breakneck early speed may be a legitimate tradeoff. The term technical debt was coined to help articulate the fact that such a tradeoff is being made for the sake of present benefit against future cost. But as an organization or project grows, the need to pay down this technical debt becomes more and more critical. And this is where notions such as “paying off debt” and “slowing down to speed up” can help us to feel perfectly at ease with the time and effort we are spending on organizational disciplines that do not obviously contribute to more features and more customers.

For example: if I have “slowed down” by taking the time to create a large battery of automated tests for my codebase, then I am actually able to “speed up” in making further changes to that codebase, because I have a higher degree of confidence not only that my changes will not break existing behaviors, but also that my changes will not result in added support costs by exposing my customers to bugs.

To paraphrase the wise man: better is he who has self-governance than he who, without it, governs a city.

And keep telling yourself, “To go faster, I must go slower.”

See also: The Start-Up Trap