The Enigma of TempleOS
Over a span of 10 years, Terry Davis wrote more than 100,000 lines of code to build an entire PC operating system called TempleOS. This astonishing feat of solo programming rivaled projects backed by large commercial and open-source teams: Davis single-handedly created his own compiler, kernel, file system, drivers, and applications.
Terry Davis programming on his TempleOS operating system
Some key facts about TempleOS:
- 100% written by Terry Davis with over 100,000 lines of code
- Custom 64-bit operating system with its own HolyC compiler and 640x480, 16-color graphics
- Built-in support for file I/O, audio/video, games, and more (networking deliberately excluded)
- Bible quotes and religious references scattered throughout code
So what enabled Davis to pull off this herculean task? He possessed elite programming skills even as an adolescent. While working at ticketing company Ticketmaster in the 1990s, a solution he wrote reportedly ran 5x faster than one built by a team of IBM engineers.
However, Davis struggled with unmedicated mental illness including schizophrenia and manic episodes which ultimately led him to homelessness. Tragically, he passed away after being struck by a train in 2018.
TempleOS represents a paradox of genius and tragedy – coming painfully close to technical greatness while unable to find stability or commercial success for its creator.
The Hardest Programming Question
Looking past the sensationalism of his story, Terry Davis gives us an opportunity to reflect on a fundamental programming problem through his work on TempleOS:
"How much complexity is appropriate for a given task?"
This question was posed by UgiBugi in a YouTube video analyzing TempleOS and Terry Davis. It succinctly frames the hardest programming design challenge engineers continually face.
Should your code architecture resemble a towering skyscraper or a humble cottage? There are situational needs for both ends of that spectrum and everything in between. Master programmers have a knack for finding the "Goldilocks zone" of balancing essential complexity vs unnecessary complication.
Let's examine the real-world tradeoffs of complexity and why this question keeps causing headaches even among the best engineers.
Complexity Tradeoffs in Game Optimization
As an avid gamer, I'm constantly trying to eke out a few extra FPS (frames per second) by tweaking graphics settings to optimize game performance. The same complexity balancing act comes into play here.
Higher FPS demands more efficient code and systems. That requirement then gets translated to game engine programmers who make under-the-hood optimizations.
For example, the complex "scene graph" architectures now prevalent in game engines organize the rendering of gigantic open worlds. Rendering lush environments like those in Far Cry or Horizon Zero Dawn demands far deeper code complexity than simple 2D arcade classics required.
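To make the scene-graph idea concrete, here is a minimal sketch (not any real engine's API): each node stores a position relative to its parent, and world positions are computed by accumulating offsets down the tree, so moving a parent automatically moves everything attached to it.

```python
# Minimal scene-graph sketch. Real engines attach full transforms,
# meshes, and culling data to each node; this only shows the core
# idea of hierarchical positioning.

class Node:
    def __init__(self, name, x=0.0, y=0.0):
        self.name = name
        self.x, self.y = x, y      # position relative to parent
        self.children = []

    def add(self, child):
        self.children.append(child)
        return child

    def walk(self, px=0.0, py=0.0):
        """Yield (name, world_x, world_y) for this node and its subtree."""
        wx, wy = px + self.x, py + self.y
        yield self.name, wx, wy
        for child in self.children:
            yield from child.walk(wx, wy)

world = Node("world")
car = world.add(Node("car", x=10, y=5))
car.add(Node("wheel", x=1, y=-1))

# The wheel's world position combines the car's offset with its own.
positions = {name: (x, y) for name, x, y in world.walk()}
```

This hierarchy is exactly where the extra complexity buys something: moving `car` once repositions every part of it, which is essential at open-world scale and pure overhead for a Pong clone.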
| Game | Est. Lines of Code |
|---|---|
| Pong (1972) | 500 |
| Super Mario Bros (1985) | 10,000 |
| Far Cry 6 (2021) | At least 5 million |
We can see a clear progression of codebase complexity matching increasing graphical demands over time. But left unchecked, escalating complexity can reach a point of diminishing returns as programs become too convoluted to manage and debug efficiently.
Let's examine why this remains an intractable programming problem with no definitive solutions.
Varying Opinions Abound
The necessary level of complexity provokes frequent disagreement among programmers based on their background and experiences. Quotes from renowned engineers reveal the spectrum of opinions on balancing simplicity vs capability:
"Controlling complexity is the essence of computer programming." – Brian Kernighan, creator of AWK and AMPL languages
Kernighan emphasizes keeping complexity in check as an essential coding skill. Contrast that view with security expert Bruce Schneier:
"Complexity is the worst enemy of security."
Schneier indicates too much complexity makes systems harder to secure. Linux creator Linus Torvalds similarly prioritizes simplicity:
"I'd argue that excessive complexity is the main reason systems are insecure."
On the other end, JavaScript creator Brendan Eich defends flexibility derived from complexity:
"Always design software that handles more complexity than you think it needs today. But do it in a simple & cohesive way."
Ultimately there are reasonable arguments for either complexity or simplicity in a given context. This split of expert opinion encapsulates why settling on an "appropriate" complexity threshold proves so vexing in practice.
Measuring Complexity Concretely
Despite open debate on appropriate complexity, perhaps objective code measurements could at least quantify software complexity. Some useful metrics include:
Lines of Code (LOC): Total count of lines in a codebase
- Generally correlates with complexity, though it varies greatly by language
| Project | Code Lines | Contributors |
|---|---|---|
| Linux kernel | 27 million | 17K |
| React JS library | 35K | 1K |
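Even a metric as blunt as LOC hides judgment calls: do blank lines and comments count? A toy counter (dedicated tools like cloc handle multi-line comments and dozens of languages far more carefully) shows the idea:

```python
# Rough LOC counter: counts non-blank, non-comment lines in a source
# string. Only a sketch of the metric, not a replacement for real tools.

def count_loc(source, comment_prefix="#"):
    total = 0
    for line in source.splitlines():
        stripped = line.strip()
        if stripped and not stripped.startswith(comment_prefix):
            total += 1
    return total

sample = """
# setup
x = 1

y = x + 1
"""
print(count_loc(sample))  # 2
```

Change the rules slightly (count comments, count blanks) and the same file yields a different "size", which is one reason cross-project LOC comparisons carry a wide margin of error.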
Cyclomatic Complexity: Quantitative score indicating complexity based on number of execution paths through code
- Useful for measuring complex functions or methods
- Values over 20 generally mark highly complex code
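Cyclomatic complexity can be approximated as the number of decision points plus one. The hypothetical helper below counts branching constructs in Python source with the standard `ast` module; real analyzers are more rigorous, but the principle is the same:

```python
# Approximate cyclomatic complexity: decision points + 1.
# Counts if/for/while/except/boolean-operator nodes; a simplification
# of what production analyzers compute.

import ast

def cyclomatic_complexity(source):
    decisions = 0
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.If, ast.For, ast.While,
                             ast.ExceptHandler, ast.BoolOp)):
            decisions += 1
    return decisions + 1

snippet = """
def classify(n):
    if n < 0:
        return "negative"
    for d in range(2, n):
        if n % d == 0:
            return "composite"
    return "no small factors"
"""
print(cyclomatic_complexity(snippet))  # 4 (two ifs + one for + 1)
```

A score of 4 is easy to test exhaustively; a function scoring over 20 has so many execution paths that covering them all becomes impractical, which is why that threshold flags refactoring candidates.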
Technical Debt: Shorthand for the implied cost of code cleanup to bring systems up to modern standards
- Measures upcoming effort needed to address complexity
However, these measurements have a high margin of error and don't capture real architectural complexity well. Two systems can have a similar LOC or cyclomatic score – yet have drastically divergent levels of conceptual complexity.
There is still no universal formula that yields a definitive complexity rating for all systems.
Key Principles for Managing Complexity
In lieu of fixed complexity quotas, a few helpful principles serve as handy guideposts when navigating simplicity vs capability tradeoffs:
- KISS – Keep It Simple, Stupid
- Add complexity only for compelling reasons
- YAGNI – You Ain't Gonna Need It
- Don't overengineer features not yet needed
- Do One Thing Well
- Stay focused on primary purpose vs adding side capabilities
- Refactor Mercilessly
- Constantly reduce complexity accumulated over time
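As a small illustration of "Refactor Mercilessly", the two functions below (an invented shipping-cost example) compute identical results, but the flat version replaces nested branches with a lookup table, leaving fewer execution paths to hold in your head:

```python
# Before: nested conditionals, four separate return paths.
def shipping_cost_nested(weight_kg, express):
    if express:
        if weight_kg > 10:
            return 25.0
        else:
            return 15.0
    else:
        if weight_kg > 10:
            return 12.0
        else:
            return 6.0

# After: the same pricing rules stated once, as data.
def shipping_cost_flat(weight_kg, express):
    heavy = weight_kg > 10
    table = {(True, True): 25.0, (True, False): 15.0,
             (False, True): 12.0, (False, False): 6.0}
    return table[(express, heavy)]
```

Neither version is "wrong", but when a fifth pricing tier arrives, the table grows by one row while the nested version sprouts another branch – exactly the kind of creeping complexity these principles target.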
Ruthlessly adhering to these principles helps deter complexity creep that can gradually produce dense hairballs of code.
Of course your mileage may vary depending on language and problem space – graphics programmers may live with higher complexity than CRUD database admins. But keeping complexity constantly in check ultimately aids all programmers.
The Crux of the Conundrum
Why does resolving appropriate complexity remain an endless riddle in programming? Ultimately because simplicity itself is fundamentally subjective. Code readability, architectural elegance, and reasonable complexity manifest differently based on individual outlook.
Much like TempleOS – one programmer‘s streamlined masterpiece may resemble arcane madness to another. There is no universal litmus test for pinpointing peak complexity nor ever will be.
That frustrating reality is precisely why this question persists as the hardest quandary in programming. It reveals how much craftsmanship and taste vary even among brilliant engineers pursuing technical excellence. Debating where systems should fall on the complexity spectrum plays its own role in evolving programming philosophies.
Terry Davis and TempleOS serve as a salient embodiment of that puzzle – genius work lauded by some yet utterly impenetrable to outsiders. Computing legends like John Carmack and Linus Torvalds expressed awe at Davis's solo accomplishment. But without accessible code or purpose, TempleOS faded into obscurity for mainstream users.
Determining appropriate complexity ultimately comes down to situational judgment calls rather than formulaic solutions. That truth provides endless fodder for healthy disagreement and learning as we continually expand our programming wisdom.
The tragic tale of TempleOS and its creator Terry Davis reignites essential conversations on the deepest riddles at the heart of software architecture – while reminding us to uplift humanity amidst technical pursuits.