Oatmeal

A digital pillow fort

Screenshot of Pocket Forth running on Mac OS System 7.

I’m toying with the idea of becoming a full time Mac OS System 7 developer.


Edited to add that I’ve had so much fun playing with this implementation of Forth on Mac OS System 7 that I quickly built a little microsite to help archive the info I’ve found about it.

My programming language odyssey

While I wouldn’t say I’m wicked adept at any one language, I’ve dipped my toes into many different languages. Here, I try to roughly recreate my programming language journey.

I can make websitez gud; HTML, CSS/SASS, JavaScript > CoffeeScript > TypeScript, and PHP

The web. A marvel, a terror. I started here, more out of ease of access than necessity, but was able to get far enough to make a career out of web dev. I should also add SQL to this list.

Elm is something I’d like to dip my toes into.

Want make thingz go brrrr; Common Lisp

I don’t honestly know how I first came to Common Lisp; I think it was through a blog post, or maybe a cute book. While I don’t use it much these days, I still carry a torch for it in my heart.

Want lovely tooling; Smalltalk

I sort of wish I’d never played with Smalltalk. It broke me. Smalltalk opened my eyes to a really integrated development environment.

Oh snap! Parentheses are rad!; Clojure

Clojure remains my white whale. On paper it is the perfect language for me:

  • lots of parentheses
  • good at web stuff
  • fine for game dev
  • friendly with emacs

But I’ve never felt cozy in it.

The JVM is hard, but scheme is rad!; a million flavors of scheme and scheme-like languages (Chicken, Gerbil, s7, Racket, Guile, Chibi)

Parentheses baaaby! If I was forced to stick to a single language and never touch another, I’d probably pick a scheme…the question is then which scheme!?

Racket isn’t strictly a “pure” scheme, but who cares? It can be…and it has a bananas gigantic library.

Chibi is adorably tiny and the most fully featured R7RS scheme I’ve found.

Chicken has some great docs…and is called “chicken,” I mean, come on!? That is lovely.

What about games though? I wanna make games!; Lua (especially using PICO-8 or Love2d)

I have in recent years become pretty jaded about the state of software and what most software is used for…but I love games, so, Lua is pretty rad for making games. Lua is also a really great teaching/learning language.

But I miss the parentheses; Fennel

Yeah, but what if Lua was a lisp-like language?

I’ve found that many programming languages are made or broken by their community. Fennel has one of the friendliest, most supportive communities I’ve ever witnessed in a programming language.

This is neat, but what if I wanted weirder?; Janet

Janet would be another contender for a forever language — it is weird, sort of a Clojure clone, sort of a Lisp, but totally its own thing at the same time. It is tiny, portable, and fits into similar spaces that C does…but also not really. Janet is a beast utterly of its own…also the name of my grandma.

Hold up now! I said weirder!; BQN, APL, K

Alright, this was probably me going off the deep end…

Okay, too0ooo weird and my brain is goo; gforth, pforth, and lbforth

I adore languages that I can hold entirely in my head. A big thing that helps me hold a language in my head is limited semantics. You don’t get much more limited than Forth!

The ethos at the heart of Forth is clearly articulated by its inventor,

The concept that programming is something that you need special education to do is not right. It is something that is promoted by the priesthood.

— Chuck Moore

Hold those horses…I’m in love!; RetroForth

Readers of this blog will have seen me talk about Retro before…while it makes no sense as a forever language…here I am…I’m stricken…I’m totally lovesick for it. It is tiny, it is portable, it is well documented, it assumes literate programming as the norm!

That’s a mighty nice little vm you’ve got there; Uxntal

Like Forth, this is another system that strives to be pretty much completely understandable. A system that can be held in 1 person’s head…it also offers everything you need to make little graphical programs and games.

But what if assembly?; 6502 and z80 assembly

Again, this was me going off the deep end a little bit.

What if I wanted a job though?; C, C++, Go, Java, Kotlin, and Swift

Blergh — remember when I said that Smalltalk broke me? Yeah, that broken-ness really rears its head when I try to use these gigantic enterprise languages that have terrible tooling (Go, C, and C++ are almost passable, but Kotlin and Swift are laughable).

I also once upon a time tried Rust but it literally melted a component on my laptop so I gave up.

Fuck it! Those are no fun! Go go gadget make your own programming language!; Guava

I mean…did I really make my own programming language? No. But, Guava does carry with it a lot of what I’ve liked about other languages along the way.


So, where next? What next? I’m a habitual breadth over depth kinda person. I wanna say it is time to go deep on one language…but…who knows!?

A red-tailed hawk with wings outstretched to keep its balance while perched on the topmost branch of a tall pine tree.

An unexpected visitor came to the tree in our neighbor’s yard today.

In reply to: ~karlen, "no one will ever read this but..."

~dozens recently introduced me to this series where blog posts that are at least a year old and feature the phrase “no one will ever read this but” are read aloud…and…it is remarkable.

Close up of seed pods

Signs of spring

A kiddo in winter gear looking at a birch polypore mushroom

Birch polypore.

Card game of hidden information and structured communication

This is a game of hidden information, structured communication, and cooperation. To play you will need a standard deck of playing cards. The game is a modified version of Hanabi, which is normally played with a custom deck of cards and tokens.

How to play

Remove aces and 2s from the deck. Place aces in a pile, face up. These are the information tokens, a form of in-game currency.

Place the 2s in a line at the center of the play space. All cards will be played on top of the 2s.

Shuffle the remaining cards. Deal each player (up-to 5 players) 5 cards, face down.

Once everyone has cards, pick them up. A player can never look at their own hand! Players hold their cards facing outward so that all other players can see them, but they cannot see their own cards.

Players now go around the circle, building ascending piles of cards by suit, starting from 2 and working up.

During a turn a player can take 1 of 3 actions:

  1. Take an ace (information token) and use it to tell another player something about their hand
    • A player may only reveal information about suit or number. E.g., they can indicate that another player has 2 spades, or that they have 3 3s.
    • After revealing information the ace is discarded and is unusable until it is reactivated.
  2. Discard a card from their own hand, reactivating a previously discarded ace for another player to use.
  3. Play a card onto the play piles.

A player must always have 5 cards in their hand. After playing a card, or discarding a card, a player draws another card from the remainder of the deck.

Cards that have been discarded to reactivate an ace cannot be played and are removed from the game space.

If the draw pile is empty, continue playing until either everyone runs out of cards, or no moves can be made. Play is over when no moves can be made.

The game has a soft win-state: how much the group won by is determined by how many piles they successfully complete, 2 through king.

A child runs out of frame. Behind their retreating form, a fairy fort with pine bough flooring.

From where to what?

What is an addressing mode?

In a recent post I referenced addressing modes. But what the heck are they!?

Setting the stage

The instruction register holds the program instruction that is currently being run.

A fixed number of bits within the instruction register represents the operation, i.e. the “op. code” — examples of these operations include things like add, subtract, load, and store. We can imagine the instruction register like this:

ASCII diagram of an instruction register. Links to a txt of the same diagram.

There’s a fixed number of bits allocated to the op. code (the 6 left-most boxes), and then a fixed number of bits that hold the operand/s being operated on (remaining 10 boxes). An operand could be a value, a CPU register, or a memory address. This set of fixed bits is referred to as the “address field.”

The number of bits allocated to the address field determines the amount of memory that can be addressed. The number of bits allocated to the op. codes determines how expressive the op. codes can be (or at least how many of them there can be).
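
For example, with the 10-bit address field in my diagram, 2^10 = 1,024 distinct memory locations can be addressed, and the 6-bit op. code field allows for at most 2^6 = 64 distinct operations.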

Addressing modes provide different ways to use the addressable memory.

In my diagram, 2 bits of the operation code are used to determine the addressing mode. The addressing mode tells the processor how the bits in the address field should be interpreted.

For example…

  LDA #80
  LDA $80

These similar-looking instructions are pretty different.

# tells us that the number following is a literal value.

$ tells us that the number following references a memory address.

So, LDA #80 loads the literal decimal value 80 into the A register and LDA $80 loads the value located at memory address $80 into the A register.

#80 is known as immediate mode because we are directly, or immediately, loading a value, while $80 is known as zero page mode (a one-byte form of absolute addressing).

What about a literal hex value?

BOOM!

  LDA #$80

This loads the literal hex value $80 (i.e., 128 in decimal) into the A register.
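
Putting the three forms from above side by side (my own recap),

  LDA #80   ; immediate: load the literal decimal value 80 into A
  LDA #$80  ; immediate: load the literal hex value $80 (128) into A
  LDA $80   ; zero page: load the byte stored at memory address $80 into A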

Other resources

A young child sits in tall snow, looking at the shadow their mittened hand casts.

Notes on 6502 Assembly

The NES runs a very slightly modified 6502 processor. What follows are some very introductory, and not at all exhaustive, notes on 6502 Assembly, or ASM.

If you find this at all interesting, Easy 6502 is a really great introductory primer on 6502 Assembly that lets you get your hands dirty right from a web browser.

Numbers

Numbers may be prefixed with one of the following:

  • $ means the number is in hexadecimal format
  • # means the number is a literal value

Any other number without either of these prefixes refers to a memory location.

So,

    LDA #$01

Loads the literal hex value $01 into register A.

Registers and flags

There are 3 primary registers,

  • A
  • X
  • Y

A is usually called the accumulator.

Each register holds a single byte.

SP is the stack pointer, a register that is decremented every time a byte is pushed onto the stack and incremented whenever a byte is popped off the stack.

PC is the program counter. PC is how the processor keeps track of where in the currently running program it is.

Processor flags

Each flag is 1 bit, so all 7 flags can live in a single byte.

More info on registers and flags.

Instructions

In 6502 Assembly, instructions are like words in Forth, or functions in a higher-level programming language. Every instruction takes 0 or 1 arguments.

An example of some instructions,

    LDA #$c0  ; Load the hex value $c0 into the A register
    TAX       ; Transfer the value in the A register to X
    INX       ; Increment the value in the X register
    ADC #$c4  ; Add the hex value $c4 to the A register
    BRK       ; Break - we're done

For a full list of 6502 ASM instructions see,

6502 ASM has a handful of branching instructions — they almost all rely on flags to determine what branch to follow.
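
A little sketch of my own (not from the notes above), comparing a value and then branching on the zero flag,

    LDA $c0      ; load the byte stored at zero page address $c0
    CMP #$02     ; compare it with the literal value $02 - sets the zero flag on a match
    BEQ match    ; branch to the label only when the zero flag is set
    LDA #$00     ; otherwise fall through and load 0
match:
    BRK          ; done either way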

Addressing modes

The 6502 has 65536 bytes of available memory. These bytes are typically described using the hex range $0000 - $ffff.

When the 6502 refers to addressing modes, it really means “What is the source of the data used in this instruction?”

The different modes are,

Absolute: $c000

With absolute addressing, the full memory location is used as the argument to the instruction.

Zero page: $c0

All instructions that support absolute addressing (with the exception of the jump instructions) also have the option to take a single-byte address. This type of addressing is called “zero page” - only the first page (the first 256 bytes) of memory is accessible. This is faster, as only one byte needs to be looked up, and takes up less space in the assembled code as well.
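
For instance (my own illustration), the same load in both forms,

    LDA $c000    ; absolute: assembles to 3 bytes (opcode + 2 address bytes)
    LDA $c0      ; zero page: assembles to 2 bytes (opcode + 1 address byte)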

Zero page,X: $c0,X

In this mode, a zero page address is given, and then the value of the X register is added.
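
So, as a quick sketch,

    LDX #$01     ; X = $01
    LDA $c0,X    ; loads the byte at address $c0 + $01 = $c1 into A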

Zero page,Y: $c0,Y

This is the equivalent of zero page,X, but can only be used with LDX and STX.

Absolute,X and absolute,Y: $c000,X and $c000,Y

These are the absolute addressing versions of zero page,X and zero page,Y.

Immediate: #$c0

Immediate addressing doesn’t strictly deal with memory addresses - this is the mode where actual values are used. For example, LDX #$01 loads the value $01 into the X register. This is very different to the zero page instruction LDX $01 that loads the value at memory location $01 into the X register.

Relative: $c0 (or label)

Relative addressing is used for branching instructions. These instructions take a single byte, which is used as an offset from the following instruction.
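
A minimal countdown loop (my own sketch), where the assembler turns the label into a negative offset,

    LDX #$08     ; start X at 8
count:
    DEX          ; decrement X - sets the zero flag when X hits 0
    BNE count    ; while the zero flag is clear, branch back to the label
    BRK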

Implicit

Some instructions don’t deal with memory locations, for example, INX - increment the X register. These have implicit addressing because the argument is implied by the instruction.

Indirect: ($c000)

Indirect addressing uses an absolute address to look up another address. The first address gives the least significant byte of the address, and the following byte gives the most significant byte.
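
For example (assuming, for the sake of the sketch, that $34 has been stored at $c000 and $12 at $c001),

    JMP ($c000)  ; reads the low byte from $c000 and the high byte from $c001, then jumps to $1234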

Indexed indirect: ($c0,X)

This one’s kinda weird. It’s like a cross between zero page,X and indirect. Basically, you take the zero page address, add the value of the X register to it, then use that to look up a two-byte address.

Indirect indexed: ($c0),Y

Indirect indexed is like indexed indirect, but instead of adding the X register to the address before de-referencing, the zero page address is de-referenced, and the Y register is added to the resulting address.
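
Side by side (my own sketch),

    LDX #$01
    LDA ($c0,X)  ; indexed indirect: read the two-byte address stored at $c1/$c2, then load from it
    LDY #$01
    LDA ($c0),Y  ; indirect indexed: read the two-byte address stored at $c0/$c1, add Y, then load from there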

For more on the different modes of addressing,

The stack

The current depth of the stack is measured by the stack pointer, a special register. The stack lives in memory between $0100 and $01ff. The stack pointer is initially $ff, which points to memory location $01ff. When a byte is pushed onto the stack, the stack pointer becomes $fe, or memory location $01fe, and so on.
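
A tiny push/pop sketch of my own,

    LDA #$aa     ; put $aa in the accumulator
    PHA          ; push it onto the stack - SP goes from $ff to $fe (i.e. $01fe)
    LDA #$00     ; clobber A
    PLA          ; pop the top of the stack back into A - A is $aa again and SP is back to $ff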

Jumping

Jumping is like branching with two main differences:

  • First, jumps are not conditionally executed
  • Second, they take a two-byte absolute address

For small programs, this second detail isn’t important, as you’ll be using labels, and the assembler works out the correct memory location from the label. For larger programs though, jumping is the only way to move from one section of the code to another.
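
For instance (a sketch of my own),

    LDA #$03
    JMP store    ; unconditionally jump to the label - no flags are consulted
    LDA #$08     ; never executed
store:
    STA $c0      ; stores $03 at zero page address $c0
    BRK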

Other Resources

Because these are but the barest minimum of notes, here are some more resources for continued reference.

Notes on Big O Notation

Imagine we have 2 implementations of the same function. How do we know which one is best? This is what Big O helps with! Big O notation is a little bit like the Richter scale for code performance.

Write a function that calculates the sum of all numbers from 1 up to (and including) n.

function addUpToV1(n) {
    let total = 0;
    for (let i = 1; i <= n; i++) {
        total += i;
    }
    return total;
}

function addUpToV2(n) {
    return n * (n + 1) / 2;
}

These both work! But which is better? And what does better mean?

Faster, less memory usage, and more readable

…alas, readability tends to be the lowest ranked in this hierarchy ಥ_ಥ

Let’s determine which one is faster! We’ll use performance.now() to help with this.

let t1 = performance.now();
addUpToV1(1000000000);
let t2 = performance.now();
console.log(`Time Elapsed: ${(t2 - t1) / 1000} seconds.`);

let t3 = performance.now();
addUpToV2(1000000000);
let t4 = performance.now();
console.log(`Time Elapsed: ${(t4 - t3) / 1000} seconds.`);

While this process works, it isn’t heaps reliable and is a bit hairy to talk about because time is problematic. Different computers will record different times — margins are pretty much guaranteed to be different across computers. Also, for stuff like this, time isn’t useful because they’re both wicked quick — the smallest interval of measurable time isn’t small enough!

ENTER BIG O!

console.log('big O!');

Rather than counting time, let’s count the number of operations!

What is an operation?

addUpToV2 does 3 operations:

  1. Multiplication
  2. Addition
  3. Division

Regardless of whatever value is passed for n, it’ll always do 3 operations.

addUpToV1, now, is a different story! The number of operations is tied to whatever value is passed for n!

Don’t be heaps concerned with the exact number of operations — what is important here is to understand the big picture. Big O Notation is a way to formalize fuzzy counting. With it, we can talk about how the runtime of an algorithm grows as the inputs grow. It is an expression of the relationship between the size of input and runtime.

Big O is all about the broad trends.

Legit definition:

An algorithm is O(f(n)) if the number of simple operations the computer has to do is eventually less than a constant times f(n), as n increases.
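
Applied to the functions above: addUpToV1 does a small, fixed amount of work on each of its n loop iterations, so its operation count is eventually less than some constant times n, making it O(n); addUpToV2 always does 3 operations no matter what n is, making it O(1).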

Big O helps answer the question “when n grows, how does the runtime grow?”

Some general guidance to follow:

  • Constants don’t matter! O(500) is simplified to O(1), O(13n^2) -> O(n^2)
  • Smaller terms don’t matter! O(n+10) -> O(n), O(n^2+5n+8) -> O(n^2)
  • Arithmetic operations are constant
  • Variable assignment is constant
  • Accessing elements in an array (by index) or object (by key) is constant
  • In a loop the complexity is determined by the length of the loop multiplied by the complexity of whatever is happening inside of the loop

// O(n), as n grows our runtime grows
function logAtLeast5(n) {
    for (var i = 1; i <= Math.max(5, n); i++) {
        console.log(i);
    }
}

// O(1), runtime is capped at 5 iterations no matter how big n gets
function logAtMost5(n) {
    for (var i = 1; i <= Math.min(5, n); i++) {
        console.log(i);
    }
}

let t5 = performance.now();
logAtLeast5(30);
let t6 = performance.now();
console.log(`Time Elapsed: ${(t6 - t5) / 1000} seconds.`);

let t7 = performance.now();
logAtMost5(30);
let t8 = performance.now();
console.log(`Time Elapsed: ${(t8 - t7) / 1000} seconds.`);

Chess Rules

I love when folks say stuff like “there are only a finite number of states a chessboard can occupy, therefore a computer can play chess.”

To the folks who say such things — I wish you to play chess with my 6-year-old.

Be not confined by rules! The only things governing chess in this house here are the laws of physics!

…and even then, not all need apply.

For instance, during a recent game the opposing kings left the board in order to go out on an adventure. They returned later with a large, plastic dragon. The dragon won the game.

Checkmate.
