Time in Databases

Is something in your database dependent on time? If you think not, think again. I can assure you there are plenty of such things. But, as plentiful as your time-dependent objects are, so are the creative ways I’ve seen them handled. Trust me, when you screw up time, the failures of your implementation will be felt, painfully. This is, however, understandable given the complexity of time and its limited treatment in commonplace database literature. This article aims to introduce terminology together with some best practices and considerations that should be addressed before implementing time in a database. It is inspired by the article “Kinds of Time” by Christian Kaul, and likely has significant overlaps, but provides my slightly different view.

Primary and Documentary Times

In essence there are two purposes time can serve in a database: it can be of a primary nature or of a documentary nature. Time of a primary nature is part of your primary keys, and your database engine will, if modeled accordingly, automatically ensure temporal integrity with respect to it. Time of a documentary nature consists of data points that are of a time type, like a date, but that are not part of your primary keys. If you need any constraints imposed over your documentary time, you will have to build and maintain them yourself.
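
As a minimal sketch, assuming SQL Server and using table and column names of my own invention, a primary time sits inside the primary key while a documentary time is just another column:

-- Hypothetical sketch: product prices versioned over time.
-- PriceStartsAt is primary: part of the key, so the engine prevents
-- two prices for the same product starting at the same point in time.
-- ModifiedAt is documentary: stored, but not constrained by the key.
CREATE TABLE ProductPrice (
    ProductId     int           NOT NULL,
    PriceStartsAt datetime2(7)  NOT NULL, -- primary time
    Price         decimal(19,2) NOT NULL,
    ModifiedAt    datetime2(7)  NOT NULL, -- documentary time
    PRIMARY KEY (ProductId, PriceStartsAt)
);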

For integrity reasons, any primary time values must be comparable in such a way that they form a total order. Time of day, such as 12:59, cannot be used, since it repeats itself daily, giving you no way to determine whether two instances of 12:59 coincided or happened in some succession. Because of this requirement, primary times are often expressed through some calendar convention, such as Julian day, Unix time, or perhaps most commonly ISO 8601, which even accommodates leap seconds. It is worth noting that any time affected by daylight saving is not totally ordered. In Sweden the hour between 02:00 and 03:00 on the last Sunday of October is repeated every year. Even so, and unfortunately, I see many databases here use local time as primary time.

A decent choice for a primary time would therefore be coordinated universal time (UTC). Expressed in ISO 8601, such a time looks like 2021-01-25T07:23:47.534Z. While this may look satisfactory, there is an additional concern. The precision of the data type used to store this time in the database may break the total ordering. Somewhat surprisingly, and often nastily discovered, the precision of a datetime in SQL Server is roughly 3 milliseconds, with values rounded to increments of .000, .003, or .007 seconds. The final digit in a time expressed as above can therefore only be 0, 3, or 7 in the database. While this particular choice is unintuitive, there is always a shortest time span that can be represented by a data type, called its chronon. For primary times, a data type with a chronon shorter than anything happening in succession is necessary to preserve the total ordering.
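
A quick way to see this effect, assuming SQL Server, is to cast the same ISO 8601 literal to both datetime and datetime2(7):

-- datetime rounds to increments of .000, .003 or .007 seconds,
-- while datetime2(7) has a 100 nanosecond chronon.
SELECT CAST('2021-01-25T07:23:47.534' AS datetime)     AS rounded_datetime,  -- 2021-01-25 07:23:47.533
       CAST('2021-01-25T07:23:47.534' AS datetime2(7)) AS precise_datetime2; -- 2021-01-25 07:23:47.5340000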

Given that primary times are part of primary keys in the database, and altering primary keys is normally time-consuming, the choice of data types should be made with care. Always picking the data type with the smallest chronon, such as datetime2(7) in SQL Server with its 100 nanosecond chronon, may affect performance. While it can store a time like 2007-05-02T19:58:47.1234567, it will use 8 bytes, compared to 3 bytes for the date type, should daily changes be sufficient. Keeping primary keys small should be paramount for any database designer, since smaller keys lower total storage and increase insert and join performance.
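
The storage difference can be checked directly, again assuming SQL Server:

-- DATALENGTH returns the number of bytes used to store each value,
-- which should come out as 8 and 3 bytes respectively.
SELECT DATALENGTH(CAST('2007-05-02T19:58:47.1234567' AS datetime2(7))) AS datetime2_bytes,
       DATALENGTH(CAST('2007-05-02' AS date))                          AS date_bytes;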

Documentary times are not required to have a total ordering or even be temporally consistent, making it possible for versions to overlap in time. With so much leniency, choices can be made with much less consideration. Naturally, there are cases when you want to impose the same restrictions on documentary times, particularly if you intend for them to behave as primary times at some point.

Particular Recurring Timepoints

There are some particular recurring timepoints of interest, and for some reason beyond my understanding there is no standardised way to express these. Some common ones are:

  • The end of time.
  • The beginning of time.
  • Indefinitely.
  • At an unknown time.

The end of time is what it sounds like, the infinite extension of time into the future. An application for this would be if you want to express a fact such as ‘I will love you forever’. Similarly, the beginning of time is the longest possible extension of time into the past. It could be applied in an expression such as ‘gravity has always been present in the universe’. Indefinitely is similar to these, but in this case we expect an actual point in time to come to pass, after which a time interval is no longer open-ended. An application, with a slight but important difference from ‘forever’, is ‘I will cherish rock music until the day I die’ or ‘my hair will turn gray one day’. Finally, there is the unknown time. It can be used both for past and future events, such as ‘The price was raised, but nobody remembers when that happened’ and ‘We will raise the price the next time crops fail’.

From a storage perspective, databases normally provide one special value, NULL, which is (somewhat horrifyingly) often used for all of the purposes above. Practically, one could reason that unknown time can stand in for indefinitely, which in turn can stand in for the beginning and end of time. Semantically, some important nuances are then lost. For example, stating ‘I will love you until an unknown time’ may yield an entirely different outcome than ‘I will love you forever’.

Ideally, and if your database permits user-defined types, data types that include and separate these particular timepoints should be implemented. ISO 8601 should also be extended with ways to express these notions. There is an interesting discussion on how to express them by schema.org here, for anyone who wants to dive deeper, which suggests that standards may be coming. Regardless, you should consider how you intend to manage particular timepoints like these.
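
In the absence of such types, one workaround (a sketch of my own, not a standard) is to reserve the extreme values of the chosen data type for the beginning and end of time, and to keep an explicit marker for the intended meaning rather than overloading NULL:

-- Hypothetical sketch: sentinel values plus a marker stating which notion is meant.
CREATE TABLE Devotion (
    LoverId        int          NOT NULL,
    LovedId        int          NOT NULL,
    ValidFrom      datetime2(7) NOT NULL, -- '0001-01-01' reserved for the beginning of time
    ValidTo        datetime2(7) NOT NULL, -- '9999-12-31' reserved for the end of time
    ValidToMeaning char(1)      NOT NULL  -- 'E' end of time, 'I' indefinitely, 'U' unknown, 'A' actual
        CHECK (ValidToMeaning IN ('E', 'I', 'U', 'A')),
    PRIMARY KEY (LoverId, LovedId, ValidFrom)
);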

Named Timelines

Even if there is just one single time, there are many timelines. A timeline can be thought of as an interval of time (finite or infinite) over which events happen in a temporally consistent sequence. If two events can mess up each other’s bonds in time, such as one moving the other in time, then they definitely do not belong on the same timeline. For example, if I have an appointment in my calendar between 09:00 and 10:00 today, it lives on a different timeline from the action of me, at 08:00, rescheduling it to the afternoon. Timelines can also be separated by the fact that the events they track pertain to completely different things, in which case keeping them together would only decrease readability and understandability.

Borrowing the terminology of transitional modeling, the following are some examples of timelines commonly discussed in computer science and database literature. There is so little consensus on the naming of these that understanding what they represent is what matters.

Appearance Time

Appearance time is the point in time when some value was observed, became valid, or will come into effect in real life. It tracks the natural progression between values or states, both for attributes and relationships. Note that appearance times may lie in the future, such as an already known price cut coming into effect on Black Friday.

In literature it is known by many different names: Valid time [Snodgrass], Effective time [Johnston], Application time [ANSI SQL:2011], and Changing time [Anchor modeling]. I also recall hearing these synonyms from forgotten sources: Utterance time, State time, Business time, Versioning time, and Statement time.

Assertion Time

Assertion time is the point in time when some statement is subjectively assessed with respect to its certainty. In the simple case this is done by some system acting as the asserter, with statements evaluating to either true or false. It is commonly used to track the correction or deletion of values or states, both for attributes and relationships. Note that assertion times cannot lie in the future. If someone corrects the rebate for the upcoming price cut on Black Friday, this correction necessarily happens in the present.

In literature it is also known by many different names: Transaction time [Snodgrass], Assertion time [Johnston], System versioning time [ANSI SQL:2011], and Positing time [Anchor modeling]. I have heard fewer synonyms here from forgotten sources; only Falsification time and Evaluation time come to mind.

For further reading on how to make uncertain assertions, even to the point of being sure of the opposite, there is more information on transitional modeling in this series of articles.

Recording Time

Recording time is the point in time at which information is stored in some kind of memory, typically when the data entered the database. This is very useful from a logging and later maintenance perspective. With it you can keep track of how quickly your database is growing on a per object basis, or revert to previous states of the database, perhaps after an erroneous load. It could have been the case that I sent all the price cuts for Black Friday into the production database but associated them with the wrong products due to a faulty join.

In literature there are a couple of other names: Inscription time [Johnston] and Load date [Data Vault]. A very poor synonym I’ve seen used is Transaction time, which should be reserved for assertion time alone.

Structuring Time

Structuring time is the point in time at which the information had a certain structure. Yes, structure changes over time too. This process is referred to as schema versioning in literature, but few mention keeping a named timeline for tracking when structural changes happened. If someone comes asking why there were no price cuts for Black Friday last year, you can safely assure them that ‘price cut’ was not part of your information structure at the time.

The only other name I have seen is Schema Versioning Time, but it has too technical a ring to it, in my opinion.

Unnamed Time

Unnamed time comprises all the points in time that do not fall within any of your named timelines. There will be values in your database that are of a time type but that you are unlikely to put onto named timelines. A typical example would be the point in time at which the receipt for the stuff I bought on Black Friday was printed. You are not likely to name the timeline on which birth dates occur either.

In literature there are a couple of other names: User defined time [Snodgrass] and Happening time [Anchor]. I have also seen Transaction time used for unnamed times when the time point represents some event in which a transaction took place, which is again an unfortunate confusion of terminology.

Time Tracking Scope

Before implementing time in your database, you need to consider which of the timelines above, and possibly others, you will need, since each of them will have to be kept as its own separate timeline. Along with that you will also need to determine your time tracking scope. For example, is it sufficient to track changes to any part of an address, or do you need to track changes to the individual parts of an address?

If tracking any change is sufficient, you can use a single point in time for the entire address. Essentially, you will be viewing a changed address, regardless of which part changed, as a new address. If you track the individual parts you will need several points in time, one for the street, one for the postal code, one for the state, and so on. In this case the same address can have different postal codes over time.

The latter approach, tracking time for every single object (attribute and relationship), can be achieved by modeling in sixth normal form, henceforth 6NF. With it, a change is visible without having to compare with previous rows, and no data is duplicated when only a part of something changes.
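
As a sketch of the difference, with hypothetical table and column names, tracking the whole address with a single time point versus tracking the individual parts in 6NF could look as follows:

-- Whole-address scope: one time point, any change becomes a whole new row.
CREATE TABLE CustomerAddress (
    CustomerId int          NOT NULL,
    ChangedAt  datetime2(7) NOT NULL,
    Street     varchar(100) NOT NULL,
    PostalCode varchar(10)  NOT NULL,
    State      varchar(50)  NOT NULL,
    PRIMARY KEY (CustomerId, ChangedAt)
);

-- 6NF scope: one table per attribute, each with its own time point.
CREATE TABLE CustomerStreet (
    CustomerId int          NOT NULL,
    ChangedAt  datetime2(7) NOT NULL,
    Street     varchar(100) NOT NULL,
    PRIMARY KEY (CustomerId, ChangedAt)
);

CREATE TABLE CustomerPostalCode (
    CustomerId int          NOT NULL,
    ChangedAt  datetime2(7) NOT NULL,
    PostalCode varchar(10)  NOT NULL,
    PRIMARY KEY (CustomerId, ChangedAt)
);
-- ...and so on for state and the remaining parts.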

Even if you do not go as far as 6NF, your time tracking scope has to be decided, since the number of time points you will store depends on it. Unfortunately, in many of the source systems I regularly fetch data from, there is usually just one column named “modified date”, which is documentary. In other words, you can only tell that something has changed and when, but not exactly what or what came before it. In these situations you can, with a proper data warehouse, provide the history the sources lost.

Orthogonality

If you have an implementation that keeps track of both appearance and assertion time, this is usually referred to as a bi-temporal implementation. The reason is that events in appearance time are in a sense orthogonal to events in assertion time. It is possible for the same value to appear and to be asserted simultaneously, but also at different times, so a single time point is not sufficient to describe both events. Furthermore, what value appears may be retroactively corrected by a later assertion. When a value appears may also be modified by an assertion. Keeping both of these on the same timeline, if you think of it as storing the date and time in a single column in a table, would cause collisions and ambiguities.

When appearances and assertions are easy to tell apart, using two different time points to describe them may add complexity, but it is straightforward. Problems usually arise when you are faced with a different value and nobody can tell whether it is a correction of the existing value or supposed to replace it from some point in time. This may lead to corrupt data if the wrong assumptions are made. Another issue is the fact that if you want a bi-temporal implementation with both appearance and assertion times as primary times, a single table with a single primary key cannot guarantee temporal integrity. This requires careful modeling, and only a few modeling techniques have this as a “built-in” feature.
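
To make the orthogonality concrete, here is a sketch (with names of my own) of a bi-temporal table in which both appearance and assertion time are part of the primary key. Note that the key alone does not guarantee full bi-temporal integrity, which is why the careful modeling mentioned above is needed:

-- Bi-temporal sketch: both timelines are part of the primary key.
CREATE TABLE ProductPriceBitemporal (
    ProductId  int           NOT NULL,
    AppearsAt  datetime2(7)  NOT NULL, -- appearance time: when the price takes effect
    AssertedAt datetime2(7)  NOT NULL, -- assertion time: when the price was claimed to be true
    Price      decimal(19,2) NOT NULL,
    PRIMARY KEY (ProductId, AppearsAt, AssertedAt)
);

-- The original statement and a later correction of the same appearance.
INSERT INTO ProductPriceBitemporal (ProductId, AppearsAt, AssertedAt, Price) VALUES
    (42, '2021-11-26T00:00:00', '2021-11-01T09:15:00', 75.00), -- announced price cut
    (42, '2021-11-26T00:00:00', '2021-11-10T14:30:00', 80.00); -- corrected later, same appearance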

Proxying

Some of the most confusing aspects of time in databases come from the use of proxying, whether done deliberately or unknowingly. If we assume that I have decided to keep track of appearance, assertion, recording, and structuring time in my database, with a 6NF time tracking scope, then I am very much all set for anything thrown at me from a querying perspective. However, that is under the assumption that all of those time points will be available to me when I put data into my database.

Sadly, this is often not the case. This is true both for operational systems and data warehouses. Getting information like [Using the Megastore structure as of January 5th (The database recorded on Monday 10:12:42 that ‘The manager asserted with 95% certainty on Monday at 09:15 that “The price cut will be 25% starting at midnight on Black Friday”’)] never actually happens, at least not yet. We do get some of the information some of the time though.

If we are in control of the database, we will always know when data is entering it. This opens up an opportunity. In the case that we do not know the assertion time, say we only get “The price cut will be 25% starting at midnight on Black Friday”, we can approximate it with the recording time. In this example that means missing the mark by almost an hour. As unfortunate as this is, sometimes it is the only option.

Somewhat more dangerous, but also doable, is approximating appearance time with recording time. Let’s say we only get “The price cut will be 25%”: if we approximate its appearance time with the recording time, we will be dropping the price several days too early. Since recording time always happens in the present, take utmost care when using it as an approximation of appearance time. Still, this may sometimes be the only option available.
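
A sketch of what such proxying could look like at load time, where StagedPrice is a hypothetical staging table with a nullable AppearsAt column:

-- Fall back to the recording time only when the appearance time is missing,
-- and remember that the value was proxied.
SELECT s.ProductId,
       COALESCE(s.AppearsAt, s.RecordedAt)             AS AppearsAt,
       CASE WHEN s.AppearsAt IS NULL THEN 1 ELSE 0 END AS AppearanceIsProxied,
       s.Price
FROM   StagedPrice AS s;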

Herein lies the big fallacy though. When enough approximations have been done, the different timelines become hard to distinguish, and it seems as if you can use these time points interchangeably. This is not the case. You should always strive to get hold of the times when they are available, and if proxying is necessary, and only as a last resort, then structure your loading intervals accordingly, to minimise the damage done.

Comparing Data Vault and Anchor

So far we have talked about time in databases from a theoretical perspective. There are two modeling techniques I would like to take a practical look at, since they take diametrically different approaches to which timelines serve what purposes. The two techniques, Anchor modeling and Data Vault, are related, both being forms of Ensemble modeling, but still have many differences.

Anchor modeling utilises 6NF to provide as granular a time tracking scope as possible. It designates appearance and assertion time as primary timelines, while recording time is documentary. It also maintains separate metadata for the information structure, in which structuring time is primary. By treating appearance and assertion time as primary, the database engine will ensure bi-temporal integrity. However, that requires both to be present, or to have functionally adequate approximations when they are not. Anchor also makes the assumption that values are exhaustive, such that an existing value cannot become NULL, and must instead be explicitly marked as “Unknown”.

Data Vault is similar to Anchor, but it is not in 6NF and instead groups attributes together into Satellites, for which a single point of time is used to track all changes within. It also, at least in some variations, allows for attributed relationships, in which attribute values and foreign keys reside together in a Link. If a single point of time is used to track all changes within the Link, it is not possible to look at a row and determine whether an attribute value changed or the relationship changed. The big difference is that Data Vault uses recording time as primary, while both appearance and assertion time are documentary. I do not believe it has a notion of structuring time in its standard.
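
As a rough structural sketch (simplified, with names of my own and without the metadata columns each technique prescribes), the difference in which timeline is primary shows up directly in the keys:

-- Anchor-style 6NF attribute: appearance (changing) time is primary.
CREATE TABLE ProductPriceAttribute (
    ProductId  int           NOT NULL, -- references the product anchor
    Price      decimal(19,2) NOT NULL,
    ChangedAt  datetime2(7)  NOT NULL, -- appearance time (primary)
    RecordedAt datetime2(7)  NOT NULL, -- recording time (documentary)
    PRIMARY KEY (ProductId, ChangedAt)
);

-- Data Vault-style satellite: recording (load) time is primary.
CREATE TABLE ProductPriceSatellite (
    ProductHashKey char(32)      NOT NULL, -- references the product hub
    LoadDate       datetime2(7)  NOT NULL, -- recording time (primary)
    Price          decimal(19,2) NOT NULL,
    EffectiveDate  datetime2(7)  NULL,     -- appearance time (documentary)
    PRIMARY KEY (ProductHashKey, LoadDate)
);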

The advantage of Anchor is that you do not have to worry about temporal integrity after the data has entered the database. Integrity is also practically a requirement if you want to use the technique outside of data warehousing. Anchor was designed to be a general modeling technique and it is applied in several operational systems. The downside is that you need trustworthy time points, which can require a lot of effort and digging in the sources. Values in a source that once existed and suddenly are NULL could pose a problem if they are indeed suddenly “Unknown” and your data type does not support specifying that explicitly. This has, in my experience, very rarely happened, and almost always the NULL means ‘deleted’, as in asserting the statement as false, which is a different thing and handled without problems. Analysts find it easy to work directly with Anchor models, thanks to their ability to serve data as it appeared at, or as it was asserted at, a given time, with no additional work beyond finding the correct bi-temporal time slice.

The advantage of Data Vault is that you do not have to worry at all about temporal integrity at load time. For auditing purposes, it will reproduce inconsistencies in the sources perfectly, so if you need to provide auditing and validation reports it is an excellent choice. Since Data Vault focuses specifically on data warehousing, it is also less restricted in its choice of primary timelines. However, with recording time as primary, the temporal integrity of the now documentary appearance and assertion times will likely have to be taken care of later. I do believe that if any business users are going to be using the data, this must be done at some point. In the end the same amount of work will likely have to be done in both Anchor and Data Vault, but with additional layers in the latter. With its choice of recording time as primary, Data Vault looks like an excellent choice for a persistent staging layer, with the usually recommended Dimensional model on top as the presentable part of the data warehouse.

In my opinion both are valid options. If you like many layers, using different modeling techniques, distributing a fixed total amount of work over them, then Data Vault is a good choice. If you do not want layers, and stick to a single modeling technique, doing a fixed total amount of work for that single layer, then Anchor is a good choice. Both have been proven in practice, also for Big Data, but Data Vault has many more implementations to date.

Imprecision and Uncertainty

Going forward I am doing active research on transitional modeling, in which two other aspects of time are also considered. First there is imprecision. There is no way to measure time with perfect accuracy, so all time points are imprecise to some degree. In an atomic clock this imprecision is minuscule, but not insignificant. Regardless, there are events whose boundaries are hard to determine. Like when I got married. When exactly did that happen? By using fuzzy data types, intervals, or margins of error, we can actually express imprecision in databases. There are open questions on how to address the total ordering if we allow imprecise points of time in our primary timelines. Is it possible to maintain temporal integrity with imprecise values, or will we have to treat everything as documentary, and later apply some heuristics with best guesses?

The other aspect of time is uncertainty, which is not the same thing as imprecision. Certainty is a subjective measure, in which a statement is assessed with a “probability to be true”, loosely speaking. Using certainty it is actually possible to assert that you are certain of the opposite of a statement. This takes away a hard problem of storing ‘opposite values’ in a database, by instead storing a negative certainty. Taking my marriage, if I look at “Lars was married on the 19th of June 2004” I can assert with 100% certainty that it is true, even if the time is only precise enough to pin it down to a whole day. Looking at “Lars was married between 15:00 and 16:00 on the 19th of June 2004” I may actually be less certain, and assert it with 50% certainty, since I don’t exactly remember whether it was one hour earlier or not. There are some open questions on when you contradict yourself if values are imprecise and you make several (vague) assertions. If values are precise, there is an exact formula by which you can calculate exactly when you contradict yourself.

Conclusions

Hopefully I have not made time all too confusing compared to the post of Christian that inspired me. I do believe that time in databases is a complex matter, but one that should be digestible for everyone, given that we can put ourselves on some common ground. All the different terminology and poor implementations out there definitely do not help.

It’s time to treat time more seriously.

Representing Large Networks by HIERARCHYID Chunks

If you recall, I wrote about “Polymorphic Graph Queries” a while ago. This exemplified the use of HIERARCHYID to represent the topology of a small computer network. As it turns out, there is a case, commonly seen in large networks, in which the HIERARCHYID approach will explode in both numbers and size, making it an unwieldy choice. There is however a way to work around that issue. As far as I can tell, the graph tables in SQL Server still do not support polymorphic queries, so this workaround should be valuable.

Assume that we have a reasonably large computer network, with say a million or more devices. Representing the entire topology of the network efficiently turns out to require a combination of HIERARCHYID and traditional relational tables. HIERARCHYID performs well all the way down from locations, through enclosures, devices, and ports or antennas to the actual communication media (fiber, ethernet, wireless). Because of the large number of things connected to this layer, this is where they become unwieldy and explode in numbers. HIERARCHYID does not work well when you have intermediate layers with comparatively massive amounts of connections. Such a scenario could easily bring you into needing billions of HIERARCHYID:s. Storage skyrockets and performance goes down the drain.

Instead, by having a traditional many-to-many table represent such layers, in which different HIERARCHYID:s are related to each other, it is possible to get the best of both worlds and achieve the ability to do sub-second searches through the topology. Let’s call the structure (UID, HIERARCHYID) a chunk, where the UID can typically be an integer. The relational table can then be as simple as (UID, UID), indicating that two chunks are connected, only requiring as many rows as there are connections. Polymorphic queries now need to take this into account, by first finding a number of candidate chunks, then joining these through the relational table to discard ones that are not connected, which yields the final result.
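
A sketch of the idea in SQL Server, with table and column names of my own: each chunk pairs a surrogate UID with a node in a HIERARCHYID topology, and a plain relational table connects chunks across the highly connected layer.

-- Each chunk pairs a surrogate key with a node in a HIERARCHYID topology.
CREATE TABLE Chunk (
    ChunkUID int         NOT NULL PRIMARY KEY,
    Node     hierarchyid NOT NULL
);

-- Connections across the highly connected layer are plain many-to-many rows.
CREATE TABLE ChunkConnection (
    FromChunkUID int NOT NULL REFERENCES Chunk (ChunkUID),
    ToChunkUID   int NOT NULL REFERENCES Chunk (ChunkUID),
    PRIMARY KEY (FromChunkUID, ToChunkUID)
);

-- Polymorphic search sketch: find candidate chunks on each side first,
-- then keep only those pairs that are actually connected.
SELECT c1.ChunkUID, c2.ChunkUID
FROM   Chunk AS c1
JOIN   ChunkConnection AS cc ON cc.FromChunkUID = c1.ChunkUID
JOIN   Chunk AS c2           ON c2.ChunkUID     = cc.ToChunkUID
WHERE  c1.Node.IsDescendantOf(hierarchyid::Parse('/1/')) = 1
AND    c2.Node.IsDescendantOf(hierarchyid::Parse('/2/')) = 1;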

A similar recursive query used for testing a relational parent-child hierarchy of the same network had to be stopped after having run for several hours. The benefit of HIERARCHYID is substantial, but only if you take special care of layers with high connectivity. For small uncomplicated hierarchies, like employees and managers at a company, a traditional representation with less complexity is likely sufficient. Some alternatives can be found in “Hierarchical Data in SQL” by Ben Brumm.

PostgreSQL 12 and Editing en masse

Thanks to the great work of Juan-José van der Linden, a fresh PostgreSQL generator is taking form in the test version. He also added data type conversions between the available target databases, and lists with suggested data types that simplify entering types in the interface. Before this work was added we decided to release version 0.99.6.3, so that it remains stable. This minor version, albeit in test, has been used in production for quite some time at our clients.

PostgreSQL generation in the test version.

On top of that we have also modularized the code, fixed a few bugs, and implemented some long standing pull requests. A so far rather rudimentary, but useful, editor has been added. This allows editing of an anchor and all of its attributes in the same view. Bring it up by pressing Shift+E on your keyboard while hovering over the desired anchor.

Editing en masse with some newly created attributes.

Tinker Take Two

I bought another Raspberry Pi 4B to replace an old 3B+ that did not want to play along any more. It had been acting as a web server, so it will need less software than the job scheduling server. The old server had been running Raspbian, but I am so satisfied with Alpine that I decided to switch, so I followed the first tinkering guide, but only installed:

apk add nano nodejs npm screen sudo

I only need nodejs, since that is what is used for the web server. After that I wanted to harden the system, but it turns out that ufw has moved to the edge community repository. In order to activate it edit /etc/apk/repositories.

nano /etc/apk/repositories

Add a tag named @community for it and, if you like me want the kakoune text editor, also add @testing, making the contents look as follows.

#/media/mmcblk0p1/apks
http://ftp.acc.umu.se/mirror/alpinelinux.org/v3.12/main
#http://ftp.acc.umu.se/mirror/alpinelinux.org/v3.12/community
#http://ftp.acc.umu.se/mirror/alpinelinux.org/edge/main
@community http://ftp.acc.umu.se/mirror/alpinelinux.org/edge/community
@testing http://ftp.acc.umu.se/mirror/alpinelinux.org/edge/testing

Update to get the new package lists, then add and configure ufw.

apk update
apk add ufw@community
rc-update add ufw default 
ufw allow 2222 
ufw limit 2222/tcp
ufw allow 80
ufw allow 443

After that I followed the guide to disallow root login, enable ufw, and reboot, with one exception. When editing sshd_config I also changed to a non-standard port to get rid of most script kiddie attempts to hack the server. Find the line with:

#Port 22

and uncomment and change this to a port of your liking, for example:

Port 2222

Trust by Certificate

After logging in as the non-root user I created when following the guide, I can still switch to root by using su. I need to add certbot, which keeps the certificate of the server up to date, and restore the contents of the www folder.

su
apk add certbot@community
cd /var
mount -t cifs //nas/backup /mnt -o username=myusr,password=mypwd
tar xvzf /mnt/www.tar.gz

With that in place, it’s time to update the certificates.

certbot certonly

Since I haven’t started any web servers yet, it’s safe to select option 1 and let certbot spin up its own. After entering the necessary information (you probably want to say “No” to releasing your email address to third parties), it’s time to schedule certbot to run daily. It will renew any certificates that are about to expire in the next 30 days.

cd /etc/periodic/daily
nano certbot.sh

The contents of this file should be (note that Alpine uses ash and not bash):

#!/bin/ash
/usr/bin/certbot renew --quiet

After that, make that file executable.

chmod +x certbot.sh

With that in place I can start my own web server. It’s an extremely simple static server. The Node.js code uses the express framework and is found in a script named static.js with the following contents.

var express = require('express');
var server = express();
server.use('/', express.static('static'));
server.listen(80);

The HTML files reside in a subdirectory named “static”. For now I run the server in a screen, but will likely add a startup script at some point.

Superuser Do and Terminal Multiplexing

Since the server will listen on the default port 80 I need sudo privileges to start it. The recommended way is to let members of the wheel group use sudo. Depending on what you picked for a username, exemplified by “myusr” here, run the following.

echo '%wheel ALL=(ALL) ALL' > /etc/sudoers.d/wheel
adduser myusr wheel
exit
whoami
exit

The first exit returns you to your normal user, since you have been root ever since the earlier su. The second exit ends your session, and you will have to log in again in order for the wheel membership to stick.

screen
sudo node static.js

This will run the server in the foreground, so to detach the screen without cancelling the running command, press “Ctrl+a” followed by “d”. To check which screens are running you can list them.

screen -ls

This will list all screens:

There is a screen on:
3428.pts-0.www (Detached)
1 Socket in /tmp/uscreens/S-myusr.

In order to reattach to one of the listed screens, refer to it by its session number.

screen -r 3428

Encrypted Backup to the Cloud

I will be hosting some things that I want to have a backup of, and this web server will be running on a separate subnet, so my NAS is not accessible. I’ll therefore be backing up to OneDrive (in the cloud) using rclone. You will need access to rclone on a computer with a regular web browser to complete these steps. For this, I download rclone on my Windows PC. I will elevate privileges using su first.

su
apk add curl bash unzip
curl https://rclone.org/install.sh | bash

With rclone installed it is time to set it up for access to OneDrive.

rclone config

Select “New Remote” (I named mine “onedrive”), then choose the number corresponding to Microsoft OneDrive. Leave client_id and client_secret blank (default values). Select “No” to advanced config and again “No” to auto config. This is where you will need to follow the instructions and move to your computer with the web browser to get an access_token. Once this is pasted back into the config dialogue, select the option for “OneDrive Personal”. Select the drive it finds, confirm it is the right one, and confirm again to finish the setup. Quit the config using “q” and test that the remote is working properly.

rclone ls onedrive:

Provided that worked, it is now time to enable encryption of the data we will be storing on OneDrive. Start the config again.

rclone config

Select “New Remote” and give this a different name, in my case “encrypted”, then choose the number corresponding to Encrypt/Decrypt. You will then need to decide on a path to where the encrypted data will reside. I chose “onedrive:encrypted” so that it will end up in a folder named “encrypted” on my OneDrive. I then selected to “Encrypt filenames” and “Encrypt directory names”. Then I provide my own password, since this Raspberry Pi will surely not last forever. I won’t remember a salt, so I opted to leave it blank. Choose “No” to advanced config and “Yes” to finish the setup.

With that in place I will create a script that performs the backup, placed in the folder that I want to backup. I am going to run this manually and only when I’ve been editing any of the files I need to backup.

nano backup.sh

This file will have the following contents.

#!/bin/sh
/usr/bin/rclone --links --filter "- node_modules/**" sync . encrypted:

It will filter out the nodejs modules, since they can and will be redownloaded when you run node anyway. After testing this script I can see something like the following on my OneDrive in the encrypted folder.

Prerequisites for Node.js Development

Since I moved from a 32-bit to a 64-bit operating system, some npm modules may be built for the wrong architecture. I will clean out and refresh all module dependencies using the following. There are lots of modules in my system, since it actually does more than just run a static web server, like being the foundation for Rita (our robotic intelligent telephone agent). Some modules may need to be built, which is why we need to add the necessary software to do so.

rm -Rf node_modules
apk add --virtual build-dependencies build-base gcc wget git
npm install
npm audit fix

For better editing of actual code (than nano) I will be using kakoune.

apk add kakoune@testing

Now, if you will be running this from Windows I highly recommend using a terminal with true color capabilities, such as Alacritty. Colors will otherwise not look as nice as in the screenshot below (using the zenburn colorscheme).

I believe that is all, and this server has everything it needs now. Those paying particular attention to the code in the screenshot will notice that the underlying SQLite database is Anchor modeled.

I am writing these guides mostly for my own benefit, as something to lean on the next time one of my servers calls it quits, but they could very well prove useful for someone else in the same situation.

A Lack of Context

There are things I wish source systems would tell us, but they hardly ever do. This is best laid out as an example, so look at this data:

457821, 3 000, 2020-09-20

This alone does not tell us much, so along with this we need context, commonly in the form of column names:

CUSTOMER NUMBER, BALANCE, TIMESTAMP

Fine, this is usually all we get. Now, let’s shake things up a bit by introducing a second line of data. Now we have:

457821, 16 000, 2020-09-20
457821, 3 000, 2020-09-20

Confusing, but this happens. Is the timestamp not granular enough and these were actually in succession? Is one a correction of the other? Can customers have different accounts and we are missing the account number?

Even if you can get all that sorted out, we can shake it up further. Put this in a different context:

PATIENT NUMBER, RADIATION DOSE, TIMESTAMP

Now I feel the need to know more. Are these measurements made by different persons and how certain are they? What is the margin of error? If these were in succession, what were their durations? If only one of them is correct, which one is it?

More sources should communicate data as if it was a matter of life and death. This is what Transitional modeling is all about.

Tinker, Tailor, Raspberry Pi

I went ahead and got myself a Raspberry Pi 4B with 4GB RAM, which I intend to use as a job scheduling server, only to find out that the suggested OS, Raspberry Pi OS, is 32-bit. Fortunately, the Linux distro Alpine, which I’ve grown very fond of lately, is available for the Raspberry Pi as aarch64, meaning both the kernel and the userland are 64-bit. Unfortunately the distro is currently, as of version 3.12, not set up for persistent storage and is more of a live playground. Gathering bits and pieces from various guides online, this can however be remedied with some tinkering. In this article you will find out how to set up a persistent 64-bit OS on the Raspberry Pi and share a USB-attached disk, while also adding some interesting software.

If you go ahead and buy the Pi 4, note that it has micro-HDMI ports. I thought they were mini, for which I already had cabling, but alas, another adapter had to be purchased. Also, when attaching a USB disk it is better if it is externally powered. The Pi can however power newer external SSD drives that have low power consumption. I tried with a magnetic disk based one powered over USB first, but it behaved somewhat strangely. With that said, let’s go ahead and look at how to get yourself a shiny tiny new server.

Tinkering for Persistence

After downloading the v3.12 tarball from Alpine on my macOS, it’s time to set up the SDHC card for the Pi. I actually borrowed my old hand-me-down MacBook Air that I gave to my daughter a few years ago, since it has a built-in card reader, as opposed to my newer Air. The Pi boots off a FAT32 partition, but we want the system to reside in an ext4 partition later, so we will start by reserving a small portion of the card for the boot partition. This is done using Terminal in macOS with the following commands.

diskutil list
diskutil partitionDisk /dev/disk2 MBR "FAT32" ALP 256MB "Free Space" SYS R
sudo fdisk -e /dev/disk2
> f 1
> w
> exit

The tarball should have decompressed once it hit your download folder. If not, use the option “xvzf” for tar.

cd /Volumes/ALP
tar xvf ~/Downloads/alpine-rpi-3.12.0-aarch64.tar
nano usercfg.txt

The newly created file usercfg.txt should contain the following:

enable_uart=1
gpu_mem=32
disable_overscan=1

The least amount of memory you can give the GPU for headless use is 32MB. The UART thing is beyond me, but seems to be a recommended setting. Removing overscan gives you more screen real estate. If you intend to use this as a desktop computer rather than a headless server you probably want to allot more memory to the GPU and enable sound. The full specification of the options can be found on the official Raspberry Pi homepage.

After that we just need to make sure the card is not busy, so we change to a safe directory and thereafter eject the card (making sure that any pending writes are finalized).

cd
diskutil eject /dev/disk2

Put the SDHC card in the Pi and boot. Login with “root” as username and no password. This presumes that you have connected everything else, such as a keyboard and monitor.

setup-alpine

During setup, select your keymap, hostname, etc, as desired. However, when asked where to store configs, type “none”, and the same for the apk cache directory. If you want to follow this guide to the point, you should also select “chrony” as the NTP client. The most important part here though is to get your network up and running. A full description of the setup programs can be found on the Alpine homepage.

apk update
apk upgrade
apk add cfdisk
cfdisk /dev/mmcblk0

In cfdisk, select “Free space” and the option “New”. It will suggest using the entire available space, so just press enter, then select the option “primary”, followed by “Write”. Type “yes” to write the partition table to disk, then select “Quit”.

apk add e2fsprogs
mkfs.ext4 /dev/mmcblk0p2
mount /dev/mmcblk0p2 /mnt
setup-disk -m sys /mnt
mount -o remount,rw /media/mmcblk0p1

Ignore the warnings about extlinux. This and the following trick were found in the Alpine Wiki, but in a somewhat confusing order.

rm -f /media/mmcblk0p1/boot/*
cd /mnt
rm boot/boot
mv boot/* /media/mmcblk0p1/boot/
rm -Rf boot
mkdir media/mmcblk0p1
ln -s media/mmcblk0p1/boot boot

Now the mountpoints need fixing, so run:

apk add nano
nano etc/fstab

If you prefer some other editor (since people tend to become religious about these things) then feel free to use whatever makes you feel better than nano. Add the following line:

/dev/mmcblk0p1   /media/mmcblk0p1   vfat   defaults   0 0

Now the kernel needs to know where the root filesystem is.

nano /media/mmcblk0p1/cmdline.txt

Append the following at the end of the one and only line in the file:

root=/dev/mmcblk0p2

After exiting nano, it’s safe to reboot, so:

reboot

After rebooting, login using “root” as username, and the password you selected during setup-alpine earlier. Now you have a persistent system and everything that is done will stick, as opposed to how the original distro was configured.

Tailoring for Remote Access

OpenSSH should already be installed, but it will not allow remote root login. We will initially relax this constraint. Last in this article is a section on hardening where we again disallow root login. If you intend to have this box accessible from the Internet, I strongly advise hardening the Pi.

nano /etc/ssh/sshd_config

Uncomment and change the line (about 30 lines down) with PermitRootLogin to:

PermitRootLogin yes

Then restart the service:

rc-service sshd restart

Now you should be able to ssh to your Pi. The following steps are easier when you can cut and paste things into a terminal window. Feeling lucky? Then now is a good time to disconnect your keyboard and monitor.

Keeping the Time

If you selected chrony as your NTP client it may take a long time for it to actually correct the clock. Since the Pi does not have a hardware clock, it’s necessary to have time corrected at boot time, so we will change the configuration such that the clock is set if it is more than 60 seconds off during the first 10 lookups. 

nano /etc/chrony/chrony.conf

Add the following line at the bottom of the file.

makestep 60 10

Check the date, restart the service, and check the (now hopefully corrected) date again.

date
rc-service chronyd restart
date

Having the correct time is a good thing, particularly when building a job scheduling server.

Silencing the Fan

Together with the Pi I also bought a fan, the Pimoroni Fan Shim. According to reviews it is one of the better ways to cool your Pi, but it’s still too soon for me to have an opinion. Unless controller software is installed, it will always run at full speed. It’s not noisy, but still noticeable sitting a metric meter from the Pi. Again, some tinkering will be needed since the controller software needs some prerequisites installed. We lost nano between reboots, so we will go ahead and add it again.

apk update
apk upgrade
apk add nano

Other software we need is in the “community” repository of Alpine. In order to activate that repository we need to edit a file:

nano /etc/apk/repositories

Uncomment the second line (ending in v3.12/community), exit, then install the necessary packages.

apk update
apk add git bash python3 python3-dev py3-pip py3-wheel build-base

After those prerequisites are in place, install the fan shim software using:

git clone https://github.com/pimoroni/fanshim-python
cd fanshim-python
./install.sh

apk add py3-psutil
cd examples
./install-service.sh

The last script will fail with “systemctl: command not found”, since Alpine uses OpenRC as its init system, and not systemd which this script presumes. We will instead write our own startup script:

nano /etc/init.d/fanshim

This new file should have the following contents:

#!/sbin/openrc-run

name="fanshim"
command="/usr/bin/python3 /root/fanshim-python/examples/automatic.py"
command_args="--on-threshold 65 --off-threshold 55 --delay 2"
pidfile="/var/run/$SVCNAME.pid"
command_background="yes"

There are a lot of interesting options for fanshim that you can explore, like tuning its RGB LED. Now we want this to run at boot time, so add it to the default runlevel, then start it.

rc-update add fanshim default
rc-service fanshim start

Enjoy the silence!

Adding and Sharing a Disk

Some of the files we will be transferring are going to be quite large. It would also be neat to be able to access files easily from the Finder in macOS, so I am adding a USB3 connected hard disk with 4TB storage. What follows will be very similar to setting up a NAS, and in fact, the way I fell in love with Alpine was by building my own NAS from scratch (with the minor differences being more disks and using zfs).

First we need to change the filesystem. The disk comes formatted as FAT32, which is very poorly suited for a networked disk. Samba, which is what we will be using for sharing, more or less requires a filesystem that supports extended attributes. After plugging in the drive, we will therefore repartition the drive and format it to ext4. 

cfdisk /dev/sda

Using cfdisk, delete any existing partitions and create one new partition. It should become “Linux filesystem” by default. Don’t forget to “Write” before “Quit”. Then format it:

mkfs.ext4 /dev/sda1

Now we need to add autofs to get automatic mounting. This package is in edge/testing though, so we need to enable that branch and repository, but still have main and community take preference. This can be done by labelling a repository.

nano /etc/apk/repositories

Change the line with the testing repository (the last line in my file) to the following. Note that yours will have some server.from.setup/path depending on what you selected in setup-alpine. In other words, you only uncomment the line and add the @testing label.

@testing http://<server.from.setup/path>/edge/testing

Now autofs can be installed from the labelled repo.

apk add autofs@testing

Note that dependencies are still pulled from main/community to the extent it is possible. In order to configure autofs, first:

nano /etc/autofs/auto.master

Add the following line after the uncommented line starting with /misc. It will also disconnect the hard disk after 5 minutes to save energy:

/-   /etc/autofs/auto.hdd   --timeout=300

Then create this new config file:

nano /etc/autofs/auto.hdd

Add the following line to the empty file.

/hdd   -fstype=ext4   :/dev/sda1

Now, the user pi needs to be created.

adduser pi
smbpasswd -a pi

Select desirable passwords for the pi user. The latter one will later be stored in the macOS keychain and will therefore be easy to forget, so make a note of it somewhere.

Add autofs to startup and start it now. Change the ownership of /hdd to pi.

rc-update add autofs default
rc-service autofs start
chown -R pi.pi /hdd

With that in place (disk can be accessed through /hdd) it is time to set up the sharing. For this we will use samba and avahi for network discovery.

apk add samba avahi dbus
nano /etc/samba/smb.conf

Now, this is what my entire smb.conf file looks like, with all the tweaks to get stuff running well from macOS.

[global]

  create mask = 0664
  directory mask = 0775
  veto files = /.DS_Store/lost+found/
  delete veto files = true
  nt acl support = no
  inherit acls = yes
  ea support = yes
  security = user
  passdb backend = tdbsam
  map to guest = Bad User
  vfs objects = catia fruit streams_xattr recycle
  acl_xattr:ignore system acls = yes
  recycle:repository = .recycle
  recycle:keeptree = yes
  recycle:versions = yes
  fruit:aapl = yes
  fruit:metadata = stream
  fruit:model = MacSamba
  fruit:veto_appledouble = yes
  fruit:posix_rename = yes 
  fruit:zero_file_id = yes
  fruit:wipe_intentionally_left_blank_rfork = yes 
  fruit:delete_empty_adfiles = yes 
  server max protocol = SMB3
  server min protocol = SMB2
  workgroup = WORKGROUP    
  server string = NAS      
  server role = standalone server
  dns proxy = no

[Harddisk]
  comment = Raspberry Pi Removable Harddisk                     
  path = /hdd    
  browseable = yes          
  writable = yes            
  spotlight = yes           
  valid users = pi       
  fruit:resource = xattr 
  fruit:time machine = yes
  fruit:advertise_fullsync = true

Those last two lines can be removed if you are not interested in using the disk as a Time Machine backup for your Apple devices. I will likely not use it, but since this is how I configured my NAS and it was a hassle to figure out how to get it working, I thought I’d leave it here for reference. It doesn’t hurt to keep it there in any case.

Let us also configure the avahi-daemon, by creating a config file for the samba service. Avahi will announce the server using Bonjour, making it easily recognizable from macOS (where it automagically shows up in the Finder).

nano /etc/avahi/services/samba.service

This new file should have the following contents:

<?xml version="1.0" standalone='no'?>
<!DOCTYPE service-group SYSTEM "avahi-service.dtd">
<service-group>
<name replace-wildcards="yes">%h</name>
<service>
<type>_smb._tcp</type>
<port>445</port>
</service>
<service>
<type>_device-info._tcp</type>
<port>0</port>
<txt-record>model=RackMac</txt-record>
</service>
<service>
<type>_adisk._tcp</type>
<txt-record>sys=waMa=0,adVF=0x100</txt-record>
<txt-record>dk0=adVN=HDD,adVF=0x82</txt-record>
</service>
</service-group>

Note that the txt-record containing adVN=HDD can be removed if you are not interested in using the disk as a Time Machine backup. Still, leaving it won’t hurt.

Finally, it’s time to add samba and avahi to the startup and start the services.

rc-update add samba default
rc-update add avahi-daemon default
rc-service samba start
rc-service avahi-daemon start

The disk should now be visible from macOS. Remember to click “Connect as…” and enter “pi” as the username and your selected smbpasswd from earlier. Check the box “Remember this password in my keychain” for quicker access next time. Sometimes, due to a bug in Catalina, you may get “The original item cannot be found” when accessing the remote disk. If that happens, force quit Finder, and you should be good to go again. If anyone knows of any other fix to this issue, let me know!

Automation

Now, this server will be used as a job server. Some of the jobs running will need the psql command from PostgreSQL and some others will be R jobs. Let’s install both, or whatever you need to satisfy your desires. The dev and headers are needed when R wants to compile packages from source code. You can skip this step for now if you are undecided about what to run or just need basic services like the built-in shell scripting. However, in order to run programs as different users within Cronicle, sudo is necessary.

apk add R R-doc postgresql
apk add R-dev postgresql-dev linux-headers libxml2-dev 
apk add sudo

In order to automate these jobs, we will be using Cronicle. It depends on Node.js, so we need to install the prerequisites. Its install script is fetched using curl, so curl will also need to be installed.

apk add nodejs npm curl

The installation is done as follows (it is a one-liner even if it looks broken here).

curl -s https://raw.githubusercontent.com/jhuckaby/Cronicle/master/bin/install.js | node

I want to use standard ports, so I need to change the config slightly.

nano /opt/cronicle/conf/config.json

Change base_app_url from port 3012 to 80. Much further down, change http_port from 3012 to 80, and https_port from 3013 to 443. If you want mails to be sent, change smtp_hostname in the beginning of the file to the mail relay you are using. After that an initialization script needs to be run.

/opt/cronicle/bin/control.sh setup

Now we just need to get it running at boot time. This is, however, a service that we do not want to “kill” using a PID, so we are going to enable local scripts that start and stop the service in a controlled manner instead.

rc-update add local default
nano /etc/local.d/cronicle.start

This new file should have the following line in it:

/opt/cronicle/bin/control.sh start

Now we need to create a stop file as well:

nano /etc/local.d/cronicle.stop

This file should have the contents:

/opt/cronicle/bin/control.sh stop

In order for the local script daemon to run these, they need to be executable.

chmod +x /etc/local.d/cronicle.*

With that, let’s secure things.

Hardening

Now that most configuring is done, it’s time to harden the Pi. First we will install a firewall with some basic login protection using the builtin ‘limit’ in iptables. Assuming you are in the 192.168.1.0/24 range, which was set during setup-alpine, the following should be run. Only clients on the local network are allowed access to shared folders.

apk add ufw@testing
rc-update add ufw default
ufw allow 22
ufw limit 22/tcp
ufw allow 80
ufw allow 443
ufw allow from 192.168.1.0/24 to any app CIFS
ufw allow Bonjour

With the rules in place, it’s time to disallow root login over ssh, and make sure that only fresh protocols are used.

nano /etc/ssh/sshd_config

Change the line that previously said yes to no, and add the other lines at the bottom of the file (borrowed from this security site):

PermitRootLogin no

PrintMotd no
Protocol 2
HostKey /etc/ssh/ssh_host_ed25519_key
HostKey /etc/ssh/ssh_host_rsa_key
KexAlgorithms curve25519-sha256@libssh.org,diffie-hellman-group-exchange-sha256
Ciphers chacha20-poly1305@openssh.com,aes256-gcm@openssh.com,aes128-gcm@openssh.com,aes256-ctr,aes192-ctr,aes128-ctr
MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-512,hmac-sha2-256,umac-128@openssh.com

After that, enable ufw and restart sshd. Note that if something goes wrong here you will need to plug in a monitor and keyboard again to login locally and fix things.

ufw enable
rc-service sshd restart

Now is a good time to reboot and reconnect to check that everything is working.

reboot

With root not being able to login, you will instead login as “pi”. It is possible for this user to (temporarily, until exit) elevate privileges by the following command:

su

Another option is to use sudo, but I will leave it like this for now, and go ahead with setting up some jobs. That’s a story for another article though.

I hope this guide has been of help. It should be of use for anyone tinkering with Alpine on their Raspberries, and likely some parts for those running other Linux flavors on different hardware as well.

She’ll wear a grue dress

This is a continuation of the articles “She wore a blue dress” and “Rescuing the Excluded Middle“, which introduced crisp imprecision and fuzzy uncertainty, the former being evaluative and the latter both subjective and contextual. The articles discuss, relate, and sometimes further the formalization of transitional modeling, so they are best read with some previous knowledge of this technique. An introduction can be found starting with the article “What needs to be agreed upon” or by reading the scientific paper “Modeling Conflicting, Unreliable, and Varying Information“. In this article I will discuss the effect of a chosen language upon the modeling of posits, with particular homage to the new riddle of induction and Goodman’s predicate ‘grue’.

In order to look at the intricacies of using language to convey information about the real world, we will focus on the statement “She’ll wear a grue dress”. First, this refers to a future event, as opposed to the previously investigated statement “She wore a blue dress”, which obviously happened in the past. There are no issues talking about future events in transitional modeling. Let’s say Donna is holding the dress and is just about to put it on. She would then, with absolute certainty, assert the posit “She’ll wear a grue dress”. It may be the case that the longer the time before the dress will be put on, the less certain Donna will be, but not necessarily. If, just after New Year’s Eve, she is thinking of what to wear at the next one, she could still be certain. Donna could have made it a tradition to always wear the same dress.

There is a difference between certainty and probability. If Donna is certain she will wear that dress at the next New Year’s Eve, she is saying her decision has already been made to wear it, should nothing prevent her from doing so. From a probabilistic viewpoint, lots of things can happen between now and New Year preventing that from ever happening. The probability that she will wear the dress at next New Year’s Eve is therefore always less than 1, and will be so for any prediction. Assuming the probability could be determined, it would also be objective. Everyone should be able to come up with the same number. Bella, on the other hand, could be certain that Donna will not wear the dress at the next New Year’s Eve, since she intends to ruin Donna’s moment by destroying the dress. Certainty is subjective and circumstantial. I believe this distinction between certainty and probability is widely overlooked and the concepts confused. “Are you certain? Yes. Is it probable? No” is a completely valid and non-contradictory situation.

With no problems talking about future events, let’s turn our attention to ‘grue’. Make note of the fact that you would not have reacted in the same way if the statement had been “She’ll wear a blue dress”, unless you happen to be among the minority already familiar with the color grue. If you belong to that minority, having studied philosophy perhaps, then forget for a minute what you know about grue. I will look at the word ‘grue’ from a number of different possibilities, of which only the last will be Goodman’s grue.

What is grue?

  1. It is a color universally and objectively distinguishable from blue.
  2. It is a color selectively and subjectively indistinguishable from blue.
  3. It is a synonym of blue.
  4. It is a color that is, at the current time, widely known.
  5. It is a color that is, at the current time, little known.
  6. It is a color that is, at the current time, unknown but will become known.
  7. It is a color that is, at the current time, known and synonymous with blue, but that at some point in the future will be considered different from blue (Goodman).

In (1) there will likely be no issues whatsoever. Perhaps there is a scientific definition of ‘grue’ as a range of wavelengths in between green and blue. On a side note, the color greige, a mix between grey and beige, is quite popular right now. Using such a definition of ‘grue’, anyone should be able to reach the same conclusion on whether an actual color can be said to be grue or not. Of course, most of us do not possess spectrophotometers or colorimeters, so we will judge the similarity by sight. If enough of us reach the same conclusion, we may say it is as close to an objectively determinable color as we will get. This is good, and not much thought has to go into using >grue< in a posit.

In (2) there may be potential issues. Perhaps grue and blue become indistinguishable under certain conditions, such as lighting, or let’s assume that 50% of the population cannot distinguish between grue and blue because of color blindness. Given two otherwise identical dresses of actually different colors, grue and blue, such asserters may claim that she wore or will wear both of these, simultaneously. Such assertions can be made in transitional modeling and possible contradictions found using a formula over sums of certainty (see the scientific paper). To resolve this, non-contradiction either needs to be enforced at write time or analyzed periodically. Unknown types of color blindness could even be discovered this way, through statistically significant contradictory opinions. That being said, one should document already known facts and new findings with respect to effects that may disturb the objectivity of the values used.
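To make this a little more tangible, here is a minimal sketch in Python of such a contradiction check. The tuple layout, the names, and the simplified rule used here, that the positive certainties one asserter assigns to mutually exclusive values for the same thing at the same time must not sum to more than 1, are my own assumptions for illustration; the exact formula is the one found in the scientific paper.

from collections import defaultdict

# Each assertion: (asserter, subject, attribute, appearance time, value, certainty).
# The layout and the rule below are illustrative assumptions, not the paper's exact formula.
assertions = [
    ("Archie", "she", "dress color", "2020-12-31", "blue", 1.0),
    ("Archie", "she", "dress color", "2020-12-31", "grue", 1.0),
    ("Bella",  "she", "dress color", "2020-12-31", "blue", 0.5),
]

def contradictions(assertions):
    # Sum positive certainties per asserter and per (subject, attribute, time),
    # flagging the combinations where the sum exceeds 1.
    sums = defaultdict(float)
    for asserter, subject, attribute, time, value, certainty in assertions:
        if certainty > 0:
            sums[(asserter, subject, attribute, time)] += certainty
    return {key: total for key, total in sums.items() if total > 1}

print(contradictions(assertions))
# {('Archie', 'she', 'dress color', '2020-12-31'): 2.0}

Run as-is, the sketch flags Archie, who is fully certain of both the blue and the grue dress, but not Bella, whose single half-hearted opinion contradicts nothing.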

In (3) there is a choice or a need for documentation. Either one of ‘blue’ and ‘grue’ is chosen and used consistently as the value, or both are used but the fact that they are synonymous is documented. This may be a more common situation than one may first think, since ‘grue’ could be the word for ‘blue’ in a different language. This then raises the question of synonymy. What if there are language-specific differences between the interpretations of ‘grue’ and ‘blue’, so that they nearly but not entirely overlap? If grue allows for slightly more blue-greenish tones than blue, then they are only close to synonymous. This speaks for keeping values as they were stated, but then the values themselves may need a model of their own.

With those out of the way, let us look at how well known a color grue is. In (4) almost everyone has heard of and uses grue when describing that color. This is good: both those who are about to assert a posit containing >grue< will know how to evaluate it, and those later consuming information stored in posits will understand what grue is. With (5) difficulties may arise. In the extreme, I have invented the word ‘grue’ myself and nobody else knows about it. However, when interrogated by the police to describe the dress of the woman I saw at the scene of the crime, I insist on it being grue. No other color comes close to the one I actually saw. Rare values like these, which likely can be explained in more common terms, need translation. If the translation is done prescriptively, the original statement is lost; if not, it must be done descriptively, at the cost of the consumer of the posits first having to digest the translation logic. This is a very common scenario when reading information from some system, in which you almost inevitably find its own coding schemes, with codes like “CR”, “LF”, “TX”, and “RX” turning out to have elaborate meanings.

Now (6) may at first glance seem impossible, but it is not. Let us assume that we believe the dress is blue and make the posit temporally more qualified: “She’ll wear a blue dress on the evening of December 31st 2020”. Donna asserts this with 100% certainty the day after the preceding New Year’s Eve. When looking at the dress on December 31st 2020, Donna has learnt that there is a new color named grue, and that there is nothing more fitting to describe this dress. Given this new knowledge, that the dress is and always has been grue, she retracts her previous posit, produces a new posit, and asserts this new one instead. The process can be schematically described as:

posit_1     = She'll wear a blue dress on the evening of December 31st 2020

assertion_1 = Donna, posit_1, 100% certainty, sometime on January 1st 2020

assertion_2 = Donna, posit_1, 0% certainty, earlier on December 31st 2020

posit_2     = She'll wear a grue dress on the evening of December 31st 2020

assertion_3 = Donna, posit_2, 100% certainty, earlier on December 31st 2020

Given new knowledge, you may need to correct yourself. This is precisely how corrections are managed in transitional modeling, in a bi-temporal solution, where it is possible to deduce who knew what when. This works for rewriting history as well:

posit_3     = The dress is blue since it was made on August 20th 2018

assertion_4 = Donna, posit_3, 100% certainty, sometime on August 20th 2018

assertion_5 = Donna, posit_3, 0% certainty, earlier on December 31st 2020

posit_4     = The dress is grue since it was made on August 20th 2018

assertion_6 = Donna, posit_4, 100% certainty, earlier on December 31st 2020

The dress is and always has been grue, even if grue was unheard of as a color in 2018. Nowhere do the posits and assertions indicate when grue came into use, though. This would, again, be a documentation detail, or alternatively warrant explicit modeling of values.
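To make the two schematics above a little more concrete, here is a minimal sketch in Python of how such a log of assertions could be replayed to see who knew what when. The data structures, and the rule that the latest recorded assertion per asserter and posit wins, are assumptions made only for this illustration, not a definitive implementation of transitional modeling.

from dataclasses import dataclass

@dataclass(frozen=True)
class Posit:
    statement: str

@dataclass(frozen=True)
class Assertion:
    asserter: str
    posit: Posit
    certainty: float   # 1.0 = certain, 0.0 = retracted
    assertion_time: str

posit_1 = Posit("She'll wear a blue dress on the evening of December 31st 2020")
posit_2 = Posit("She'll wear a grue dress on the evening of December 31st 2020")

log = [
    Assertion("Donna", posit_1, 1.0, "2020-01-01"),
    Assertion("Donna", posit_1, 0.0, "2020-12-31"),  # the retraction
    Assertion("Donna", posit_2, 1.0, "2020-12-31"),
]

def beliefs_as_of(log, asserter, moment):
    # The latest recorded assertion per posit, on or before the given moment, wins.
    latest = {}
    for a in sorted(log, key=lambda a: a.assertion_time):
        if a.asserter == asserter and a.assertion_time <= moment:
            latest[a.posit] = a
    return [a.posit.statement for a in latest.values() if a.certainty != 0.0]

print(beliefs_as_of(log, "Donna", "2020-06-01"))  # only the blue posit
print(beliefs_as_of(log, "Donna", "2020-12-31"))  # only the grue posit

Asked in the middle of 2020, the log answers with the blue posit; asked at the end of the year, it answers with the grue posit, while the full history of who believed what, and when, remains in place.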

Finally there is (7), in which there is a point in time, t, before which we believe everything blue to be grue and vice versa. Due to some new knowledge, say some yet to be discovered quantum property of light, those things are now split into either blue or grue to some proportions. This is really troublesome. If some asserters were certain “She wore a blue dress” and others were certain “She wore a grue dress”, in assertions made before t, that was not a problem. They were all correct. After that point in time, though, there is no way of knowing if the dress was actually blue or grue from those assertions alone. If we are lucky enough to get hold of the dress and figure out it is blue, things start to look up a bit. We would know which asserters were wrong. Their assertions could be invalidated, while we make new ones in their place. In the less fortunate event that the dress is nowhere to be found, previous assertions could perhaps be downgraded to certainties in accordance with the discovered proportions of blue versus grue.

The overarching issue here, which Goodman eloquently points out, is that this really messes up our ability to infer conclusions from inductive reasoning. How do we know if we are in a blue-is-grue situation soon to become a blue-versus-grue nightmare? To me, the problem seems to be a linguistic one. If blue and grue have been used arbitrarily before t, but after t signify a meaningful difference between measurable properties, then reusing blue and grue is a poor choice. If, on the other hand, blue and grue were actually onto something all along, then this measurable property must have been present and in some way sensed, and many assertions are likely to be valid nevertheless. This reasoning is along the lines of philosopher Mark Sainsbury, who stated that:

A generalization that all A’s are B’s is confirmed by instances unless we have good reason to believe that there is some property, O, such that every A-instance is O, and if those A-instances had not been O, they would not have been B.

In other words, some additional property is always hiding behind issue number (7).

With all that said, there are a lot of subtleties concerning values, but most, if not all, of them can be sorted out using posits and assertions, with the optional addition of an explicit model of values, together with prescriptive or descriptive measures. Even so, if language is used with proper care and with the seven types of ‘grue’ mentioned above in mind, you will likely save yourself a lot of headaches. We also learnt that people normally think in certainties rather than probabilities.

Rescuing the Excluded Middle

This is a continuation of “She wore a blue dress“, in which the concepts of imprecision and uncertainty were introduced. I will now turn the focus back on the imprecise value ‘blue’ and make that imprecision a bit more formal. In the works of Brouwer related to intuitionism, an imprecise value can be thought of as a mapping. I will introduce the notation >blue< for such a mapping of the imprecise value ‘blue’. The mapping >blue< would then be:

>blue< : x ⟶ [0,1]

In other words, for any color x it evaluates to either 1, for x being fully considered blue, or 0, if x cannot be considered blue at all. However, according to Brouwer any value in between is also allowed. It could be 0.5 for half blue, which is also known as a fuzzy imprecise value. Allowing these would confuse uncertainty, a concept codependent with imprecision. I will therefore restrict imprecise values, such as ‘blue’, to:

>blue< : x ⟶ {true, false}

The reasoning is that subjectivity enters already in the evaluation of this mapping. In the terminology of transitional modeling, it is when asserting the statement “She wore a blue dress” that the asserter evaluates the actual color of the dress against the value ‘blue’. As such, the posit will be crisp from the asserter’s point of view. Given that the dress was acceptably ‘blue’, the asserter can then determine their certainty towards the posit. Values can therefore be said to be crisp imprecise values, but only relative to a subject.
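As a small sketch of what this restriction could look like, consider the Python below. The wavelength ranges are invented for illustration only; the point is that the evaluation always returns true or false, but which of the two depends on the subject doing the evaluating.

# A crisp imprecise value evaluates to true or false, but only relative to a
# subject: two asserters may map the same actual color differently.
def blue_according_to(subject):
    # Invented, subject-specific ranges of what counts as 'blue', in nanometers.
    thresholds = {
        "Archie": (450, 495),
        "Bella":  (440, 510),   # Bella is more generous with the label
    }
    low, high = thresholds[subject]
    return lambda wavelength_nm: low <= wavelength_nm <= high

actual_color = 500  # somewhere between green and blue

print(blue_according_to("Archie")(actual_color))  # False
print(blue_according_to("Bella")(actual_color))   # True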

If we assume that the occasion when she wore a dress took place on the 1st of April 2020 and this is used as the appearance time in the posit, then it is also an imprecise value. Most of us will take this as the precise interval from midnight to midnight on the following day. At some point in that crisp interval, the dress was put on. Even so, putting on a dress is not an instantaneous event and time cannot be measured with infinite precision, so regardless of how precisely that time is presented, appearance time will remain imprecise.

With finer detail, the appearance time could, for example, have been expressed as two minutes to midnight on the 1st of April 2020. But here we start to see the fallacy of taking some time range for granted. With the same reasoning as before, we would assume that this refers to the interval between two minutes and one minute to midnight. However, there is no way of knowing that a subject will always interpret it this way. So, we need the mapping once again:

>two minutes to midnight on the 1st of April 2020< : x ⟶ {true, false}

It seems as if the evaluation of this mapping is not only subjective, but also contextual. If we know that it could have taken more than a minute to put on the dress in question, then maybe this allows for both three and one minute to midnight evaluating to true. Even when such a range is possible to specify, it is almost never available in the information we consume, so we often have to deal with evaluations like these. We have, however, become so used to evaluating the imprecision that we do so more or less subconsciously.

But, didn’t we lose a whole field of applicability in the restriction of Brouwer’s mapping? That fuzziness is actually not all lost. I believe that what assertions do in transitional modeling is to fill that gap, while paying respect to subjectivity and contextuality. It is not possible to capture the exact reasoning behind the assertion, but we can at least capture its result. Recall that an assertion is someone expressing a degree of certainty towards a posit, here exemplified by “She wore a blue dress”. An example of an assertion is: “Archie thinks it likely that she wore a blue dress”. With time involved this becomes: “On the 2nd of April Archie thinks it likely that she wore a blue dress two minutes to midnight on the 1st of April”. Even more precisely and closer to a formal assertion: “Since the >2nd of April< the value >likely< appears for (Archie, certainty) in relation to ‘since the >1st of April< the value >blue< appears for (she, dress color)'”.

As can be seen, assertions can themselves be formulated as posits. Given the example assertion, its value is also imprecise, with a mapping:

>likely< : x ⟶ {true, false}

We have, however, in transitional modeling, decided that certainty is better expressed using a numerical value. Certainty is taken from the range [-1, 1], with 1 being 100% certain, -1 being 100% certain of the opposite, and 0 being complete uncertainty. Certainties in between represent beliefs to some degree. We have to ask Archie: when you say ‘likely’, how certain is that, given as a percentage? Let’s assume it is 80%. That means the corresponding mapping becomes:

>0.8< : x ⟶ {true, false}

Certainty is just another crisp imprecise value, but relative to a subject who has performed a contextual evaluation of the imprecise values present in a posit with the purpose of judging their certainty towards it. In transitional modeling terminology, an asserter (the subject) made an assertion (the evaluation and judgement).
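Below is a minimal sketch in Python of this last step, going from a word like ‘likely’ to a number in [-1, 1]. The verbal scale and Archie’s 0.8 are assumptions made for the sake of the example, not a fixed vocabulary.

# Certainty lives in [-1, 1]: 1 is certain, -1 is certain of the opposite,
# and 0 is complete uncertainty. The verbal scale is an invented example.
VERBAL_CERTAINTY = {
    "certain": 1.0,
    "likely": 0.8,
    "no idea": 0.0,
    "unlikely": -0.8,
    "certainly not": -1.0,
}

def describe(certainty):
    # A rough, human-readable interpretation of a numeric certainty.
    if not -1.0 <= certainty <= 1.0:
        raise ValueError("certainty must lie in [-1, 1]")
    if certainty == 0.0:
        return "complete uncertainty"
    direction = "that it holds" if certainty > 0 else "of the opposite"
    return f"{abs(certainty):.0%} certain {direction}"

print(describe(VERBAL_CERTAINTY["likely"]))  # 80% certain that it holds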

The interesting aspect of crisp imprecise values is that they respect “tertium non datur”, which is Latin for “no third is given”, more commonly known as the law of the excluded middle. In propositional logic it can be written as (P ∨ ¬P), basically saying that every statement is either true or not true. An asserter making an assertion, evaluating whether the actual color of the dress can be said to be blue, obeys this law. It can either be said to be blue or it cannot. This law does not hold for fuzzy imprecise values. If something can be half blue, then neither “the dress was blue” nor “the dress was not blue” is fully true.

Fuzziness is not lost in transitional modeling though. Since certainty is expressed in the interval [-1, 1], it encompasses that of fuzzy values. The difference is that fuzziness comes from uncertainty and not from imprecision. Uncertainty is subjective and contextual, whereas fuzzy imprecise values are assumed objective and universal. I believe that this makes for a richer and truer to life, albeit more complex, foundation. It also rescues the excluded middle. Statements are either true or false with respect to crispness, but it is possible to express subjective doubt. Thanks to the subjectivity of doubt, contradicting opinions can be expressed, but that is the story of my previous articles, starting with “What needs to be agreed upon“.

As a consequence of the reasoning above, a posit is open for evaluation with respect to its imprecisions. Such imprecisions are evaluated in the act of performing an assertion, but an assertion is also a posit. In other words, the assertion is open for evaluation with respect to its imprecisions (the >certainty< and >since when< this certainty was stated). This can be remedied by someone asserting the assertion, but then those assertions will remain open, so someone has to assert the new assertions asserting the first assertions. But then those remain open, so someone has to assert the third level assertions asserting the second level assertions asserting the first level assertions, and so on…

Rather than having “turtles all the way down“, in transitional modeling there are posits all the way down, but for practical purposes it’s likely impossible to capture more than a few levels. The law of the excluded middle holds, within a posit and even if imprecise, but only in the light of subjective asserters performing contextual evaluations resulting in their judgments of certainty. To some extent, the excluded middle has been rescued!
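As a final sketch, here is what posits all the way down could look like in Python, using plain tuples of (appearing for, value, appearance time) only to keep things short; the layout is my own shorthand rather than a prescribed format. Each level of assertion is just one more posit, and the nesting simply stops when it is no longer practical to capture.

# Level 0: the original posit.
posit = (("she", "dress color"), "blue", "2020-04-01")

# Level 1: Archie asserts the posit; the assertion is itself just another posit.
assertion_1 = ((("Archie", posit), "certainty"), 0.8, "2020-04-02")

# Level 2: Bella asserts Archie's assertion, and so on, for as long as practical.
assertion_2 = ((("Bella", assertion_1), "certainty"), 1.0, "2020-04-03")

for level, p in enumerate([posit, assertion_1, assertion_2]):
    _, value, time = p
    print(level, value, time)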

Identification, identity, and key

Since we have started to recognize “keys” in our information modeling tool (from version 0.99.4), this is a timely occasion to discuss identification and identity. Looking at my previously published articles and papers, I have repeatedly stated that identification is a search process by which circumstances are matched against available data, ending in one of two outcomes: an identity is established or it is not. What these circumstances are and which available data you have may vary wildly, even if the intent of the search is the same. Think of a detective who needs to find the perpetrator of a crime. There may have been strange blotches of a blue substance at the crime scene, but no available register to match blue blotches of unknown origin against. We have circumstances but little available data, yet detectives often put someone behind bars nevertheless.

On the other hand, think of a data integrator working with a data warehouse. The circumstance is a customer number, and you have a neat and tidy Customer concept with all available data in your data warehouse. The difference from the detective is the closeness of agreement between different runs of the identification process. The process will look very much the same for the next customer number, and the next, and the next. So much so that the circumstance itself may warrant its own classification, namely being a “key” circumstance. In other words, a “key” exists when there are circumstances that every time produce an identical search process against well-defined and readily available data. As such, a “key” does not in any way imply that it is the only way to identify something, that it is independent of the time frame in which you are looking at it, or that it cannot be replaced at some point.

These are the reasons why, in Anchor and Transitional modeling, no importance has been given to keys. Keys cannot affect a model, because if they did, the model itself would reflect a single point of view, be bound to a time frame, and run the risk of becoming obsolete. That being said, if a process is close to perfectly reproducible, it would be stupid not to take advantage of that fact and help automate it. This is where the concept of a “key” is useful, even in Anchor and Transitional modeling, which is why we are now adding it as an informational visualization, with the intent of also creating some convenient functionality surrounding keys. Even so, regardless of which keys you add to the model, the model itself is always unaffected by them, precisely for the reasons discussed above.

I hope this clarifies my stance on keys. They are convenient for automation purposes, since they help the identification process, but shall never affect the model upon which they work in any way.

Visualization of Keys

Visualization and editing of keys have been added in version 0.99.4 (test) of the free online Anchor modeling tool. This is so far only for informational purposes, but it is of great help when creating your own automation scripts. Note that a key in an Anchor model behaves like a bus route, stopping at certain items in the graph. In order to create a key, select an anchor and at least one attribute (shift-clicking lets you select multiple items). To edit a created key, click on its grey route to highlight it in red. You can then add or remove items or change its name. Click again to leave key editing mode. Along with this come some improvements to the metadata views in the database, among them the new _Key view.