This is a new mode of working, and it comes with plenty of felt uncertainty. Will I get too distracted to be productive?
But I want to try it, to explore new ways of maximizing overall productivity. In particular, I was getting a bit restless with the one project at hand: I had other ideas jotted down but couldn't get my hands dirty soon enough. In a way, this is my attempt to be more "async"/"non-blocking".
After all, it's good to explore new ways. If it doesn't work, you either keep modifying the execution until it does, or revert. But you'll never know if you don't try. In theory, context switching incurs some overhead, but used wisely, different projects can become refreshing getaways from one another, hopefully reducing overall coding downtime.
This month, two new projects were opened.
- A new web app in Elm, which I hope will be the culmination of all my previous Elm experience, with all the lessons learned and all the refactorings I couldn't bring myself to do on existing projects. If things go well, I'm planning to open-source it, i.e. it might become my first open-source project.
- A web browser extension, using Elm for state management and UI. I suppose it'll be JS-heavy, but what will that look like if you can get Elm to fit in nicely?
Besides that, I also continued working on the Android app during the first 10 days, getting the Calendar integration done for the monthly and yearly stats. It's pretty cool, esp. the "heatmap" view, where I simplified the design by getting rid of "axis labeling", namely the month and week headers (the example code to get them properly aligned was too messy for me), and instead just labeled the first day of each month within the heatmap itself (and didn't bother showing a week header at all). It's not just me being lazy: I think it reduces visual/cognitive noise, as the point of using a heatmap is to offer an all-year-long bird's-eye view, and I find the popular arrangement, as seen in the GitHub contributions chart, too verbose.
Elm Done Right
Now, this is not about touting; on the contrary, it's about giving myself a chance to rectify the technical debt in my previous projects.
- Doing ports properly, message-passing style. For Cmd ports (from Elm to JS), but even more so for Sub ports (from JS to Elm). I noticed an asymmetry between the two: it's easy and natural to declare a single, private outgoing port and instead expose various higher-level functions returning Cmd msg that call this port function with some JSON message (typically a tag plus some payload). For an incoming port, however, you've got to make it public, as it'll be used in subscriptions, and you have to prepare the corresponding "reply" messages on the JS side. No big deal; it's just that in the past, I always created multiple incoming ports out of laziness, as I tried to do as little as possible in JS. I'm yet to observe any noticeable performance hit, but subscribing to such a multitude of ports is just wasteful. On the other hand, I'm not convinced we should use the absolute minimum number of ports (two: one incoming, one outgoing). We could, but IMO it causes too much mixup of logic on both the Elm and JS sides. Even just as a separation/organization of concerns, I prefer a pair of ports per mechanistic module: one pair for IndexedDB, another for LocalStorage, yet another for WebSocket, etc. So far, I see this as more of an art than a science. BTW, I saw that we're going to do lots of message passing in browser extension dev as well, e.g. chrome.runtime.onMessage, so this is definitely not just Elm; people generally agree that it's a more scalable, reliable, and ultimately more human approach to defining the behavior of a system.
- Revamping the dev setup with newer and older techs. Can esbuild replace the tried and true (but bloaty) Webpack setup? Turns out the answer is a definite yes, esp. thanks to how simple an Elm project (using Elm-UI) is: no CSS/SCSS processing, just JS and one index.html. Esbuild is way faster, in bundling/minification as well as rebuilding. Speaking of which, although its live-reload (watch + serve) feature is very cool, I ended up not keeping it enabled (via a conditional SSE handler), because I realized that serve mode alone rebuilds the entry files upon every request (i.e. page reload), and I very much prefer manual reloading to automatic reloading on every file modification. But for minifying Elm output, UglifyJS still outperforms esbuild big time, because it supports fine-grained configuration of "unsafe" compressions, which Elm's JS output (but not JS code in general) can safely take advantage of. As a result, on my initial demo app, esbuild --minify produced 106 KB of output, while UglifyJS with a custom config reduced the file to 79 KB, a 25% improvement! Then again, it took esbuild 62 ms, versus, I don't know, more than 8 seconds for UglifyJS. But speed isn't actually the primary motivation for the switch: esbuild is a single-binary executable, with no need to install lots of npm packages (and the question always is: should I keep them up-to-date?), and that is neat. To keep up with this spirit, my build script makes UglifyJS optional: if it's not installed, it falls back to esbuild to minify the Elm output. Speaking of which, I learned some Batch scripting! (Why? Incidentally, at the time, it was more reliable to connect to the internet on my Windows 7 machine, so I thought, let's change things up and see if the old geezer is still useful for coding an Elm project!) Then, for a cross-platform version of the build script, I naturally reached for Tcl, something I'd been wanting to brush up on for quite a while. Turns out, Batch, however outdated it is (you have to use labels and essentially GOTO, with some improvements like call and exit /b, to mimic functions), is pretty concise, like Shell. But the way I see it, since build scripts are short anyway, expressiveness and explicitness are way more valuable, and that's why I believe Tcl is a wonderful language for such tasks. Instead of the concise (cmd1 || cmd2) && cmd3, let's use catch, or even better, try...trap.../on error..., to handle failures explicitly.
- JS deps in repo. Yes, just commit the JS files; no more "package.json". As of now, I find myself using only one fairly lean package, "idb" by Jake Archibald, so it's clearly not worth the extra hassle of dealing with npm stuff. As far as I can see, the most straightforward way to get the distribution is to download the tarball from npm and pick the minimal set of files (plus the license). This strategy is inspired by uBlock Origin. I think the approach works especially well for those lean, zero-dep, stable, and well-made libraries that people rely upon. "idb" is a good example; I mean, shouldn't this be the official API for IDB??
- Less "IDE". My old Win7 machine could barely handle VS Code, and what's worse, I noticed that the LSP-based Elm extension wasn't reliable: I ended up still having to run elm make to be sure the program was OK (yes, the extension had false negatives). On the other hand, far too often it wouldn't let elm-format run (on file save) while the code was still a work in progress, whereas a simpler implementation such as the Sublime Text plugin was way more tolerant. Yes, I reverted to writing Elm in Sublime, which did only three things: syntax highlighting, format on save, and auto-completion from existing tokens (just a generic, built-in mechanism). Result: 1) I did have to type more this way, and 2) I started to use the compiler as a dev tool again! And you know what, I just felt much less distracted with this minimal, IDE-less setup. I don't particularly hate IDEs (where I have to use one, as in Android dev). But in a simplified workflow like this, I actually feel more empowered, because it's easier to get into the flow when less "prompting" is pushed at me (and I less often have to decide whether to accept or reject suggestions, or pick one among many). Maybe those suggestions help more than they harm when you're working with a huge API (again, as in Android dev). But Elm, or Tcl, isn't designed like that. In a way, it's quite analogous to natural-language writing; as a matter of fact, I never liked Gmail's suggestions when I tried them once, and I still can't imagine writing a letter that way without being constantly distracted. I mean, they could be helpful to language trainees, or to those who regularly have to give repetitive replies. But when I'm expressing myself, I tend to just want to be left alone (and stay focused). And in general, do I want AI, even the best of breed, to finish my sentences or lead my next thought? It depends. Some part of programming, or writing, can be boring, I mean, "mechanical" (unless, of course, it happens to be your thing). But what I believe is this: developing an adequate level of fluency is necessary to become capable of creative thinking in that particular medium.
And being self-sufficient goes a long way toward that. I used to look things up in dictionaries frequently while writing, because I was trying to be a good student/learner of the language. Dictionaries are good, generally speaking. It's just that I've now realized that what's more important than properly using language constructs is the ability to readily convert raw thoughts into text. All the improvements can come later, like a refactoring process. Heck, maybe that latter part will become the job of AI in the future. OK, I'm not saying we should all start brain-dumping and just writing shitty code or prose. Rather, I think I need to seriously work on this shift of emphasis from being pedantic to being fluent, and that will only come with more writing, more deliberate practice, without distraction. I know, this is still vague and far from an operational technique, but I'm starting to see that writing and programming are more alike than they differ. The fact that AI is already quite good at both seems to support this view.
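To make the "one pair of ports per mechanistic module" idea from the first point concrete, here is a minimal sketch of the JS side of such a pair. The port names (storageOut, storageIn), the tags ("save", "load", "loaded"), and the Map-backed store are all illustrative assumptions, not code from my actual project:

```javascript
// Attach handlers to one outgoing/incoming port pair for a "storage" concern.
// Every message is a { tag, payload } object, mirroring the Elm side, where a
// single private port is wrapped by higher-level Cmd-returning functions.
function attachStoragePorts(app, store = new Map()) {
  const handlers = {
    save: ({ key, value }) => {
      store.set(key, value);
      return null; // fire-and-forget: no reply message
    },
    load: ({ key }) => ({
      tag: "loaded",
      payload: { key, value: store.has(key) ? store.get(key) : null },
    }),
  };
  app.ports.storageOut.subscribe((msg) => {
    const handler = handlers[msg.tag];
    if (!handler) return; // unknown tags are ignored, not fatal
    const reply = handler(msg.payload);
    if (reply !== null) app.ports.storageIn.send(reply); // back into Sub port
  });
}
```

A second concern (say, WebSocket) would get its own attachWebSocketPorts with its own pair, keeping the dispatch tables small and the concerns separated.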
Web Tooling Evolved
As I mentioned, esbuild replacing Webpack is a breath of fresh air. It's essentially a re-implementation of what's already available and mature, so the change isn't exactly revolutionary, but as Obama liked to put it, "Better is good!", and in this case, the performance gain is really good. The same goes for Tauri, which I also tried for the first time on the new Elm project. And it just worked, even on Windows 7, where I installed the Edge-based WebView2 (v109, the final version for the end-of-support OS). All the app data (including IDB) are located in an independent directory under %LocalAppData% (but shouldn't it be in Roaming though?), and both the dev and prod builds store IDB data under "EBWebView/Default/IndexedDB/", but the former has the origin string http_127.0.0.1_8000, whereas the latter has https_tauri.localhost_0, both folders suffixed with .indexeddb.leveldb. So apparently, in a prod build, Tauri runs its own HTTPS-enabled server, whereas in dev mode, it simply fetches from whatever local dev server is being used (in my case, esbuild in serve mode).
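Incidentally, the optional-UglifyJS policy from the build script in the previous section could be sketched as a small decision function. This is not my actual Batch/Tcl script; the function and its probing callback are hypothetical, and the compress flags shown are the ones commonly recommended for Elm 0.19 output (pure_funcs for the F2...A9 wrappers, pure_getters, unsafe_comps, unsafe), which you should double-check against your Elm version:

```javascript
// Decide which minifier the build should invoke for compiled Elm output.
// isInstalled is a caller-supplied probe (e.g. wrapping `where`/`which`).
function pickMinifier(isInstalled) {
  if (isInstalled("uglifyjs")) {
    // UglifyJS wins on output size thanks to Elm-safe "unsafe" compressions.
    return {
      cmd: "uglifyjs",
      args: [
        "--compress",
        "pure_funcs=[F2,F3,F4,F5,F6,F7,F8,F9,A2,A3,A4,A5,A6,A7,A8,A9]," +
          "pure_getters,keep_fargs=false,unsafe_comps,unsafe",
        "--mangle",
      ],
    };
  }
  // esbuild is always available, since it already does the bundling.
  return { cmd: "esbuild", args: ["--minify"] };
}
```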
A few years ago I tried Electron on Arrow as a potential means of distribution, but it didn't work out because the file:// protocol wasn't supported by Elm if the app was a Browser.application (because of elm/url). (So apparently, Electron didn't have a localhost server like Tauri does?) Anyway, after that, whenever possible, I'd just use Browser.document instead, namely giving up on Browser.Navigation (i.e. stuff from the History API). Later, while working on the chat app, I realized that the URL wasn't actually necessary as the backing mechanism for routing persistence. Sure, if you need to share a location with people via the web, then the URL is the only viable way to go; but if you're making an app that has routes, or a "nav graph" in Android jargon, and maybe also remembers its last route (or even the N most recent routes), you can implement that without using the URL path or fragment, for instance with LocalStorage as the persistence mechanism.
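The URL-free routing idea can be sketched in a few lines: mirror the current route into storage on every in-app navigation, and read it back once at startup. The key name and route shape here are illustrative choices, and the storage parameter stands in for LocalStorage (or any object with the same getItem/setItem surface):

```javascript
const ROUTE_KEY = "lastRoute"; // hypothetical storage key

// Create a tiny router persistence layer over a LocalStorage-like object.
function makeRouter(storage, defaultRoute) {
  return {
    // Call on every in-app navigation instead of pushing a History entry.
    navigate(route) {
      storage.setItem(ROUTE_KEY, JSON.stringify(route));
    },
    // Call once at init to resume where the user left off.
    restore() {
      const raw = storage.getItem(ROUTE_KEY);
      return raw === null ? defaultRoute : JSON.parse(raw);
    },
  };
}
```

In an Elm app, navigate would naturally sit behind an outgoing port, and restore could feed the initial route in through flags.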
URL linking is one of the neatest simple ideas of the web, but if what we are really striving for is a good app experience, something on par with the desktop or mobile UX but with just the web techs, then I guess relying on the History API with this "Back/Forward" button clicking mechanism isn't necessarily optimal. Definitely something to experiment more on in the future.
Browser Extension Intro
Back in my school days I made a few super simple Firefox extensions, but that was before I knew anything about app architecture, so this is actually my proper intro. And now that Manifest V3 is becoming less... edgy, I think it's a good time to get into it. Overall, I think it embraces the right kind of philosophy.
- Event-driven, using a service worker to replace the traditional background script, so that it can go into "standby" (inactive) mode when no event of interest is being emitted. Nice.
- Declarative, for network requests and content-dependent actions. I'm yet to get my hands dirty on this, esp. to see if/how the rules can be loaded into the runtime dynamically (instead of from hardcoded files). But it's a good thing in principle.
- chrome.offscreen. Not sure yet: can this be used to load an arbitrary web page (maybe via an iframe?) for the purpose of DOM scraping/parsing (e.g. to monitor content changes)? It requires Chrome 109+ (recall the final Edge version on Win7) and isn't available on Firefox yet.
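The event-driven point above pairs naturally with the tagged message passing mentioned earlier: a background service worker registers its listeners at the top level and otherwise stays suspendable. Here's a hedged sketch of such a message router; the runtime object is parameterized (it would be chrome.runtime in a real extension), and the handler names are made up:

```javascript
// Register a tag-dispatching listener on a chrome.runtime-like object.
// handlers maps message tags to (payload, sender) -> reply (value or Promise).
function registerBackground(runtime, handlers) {
  runtime.onMessage.addListener((msg, sender, sendResponse) => {
    const handler = handlers[msg.tag];
    if (!handler) return false; // not our message: close the channel
    Promise.resolve(handler(msg.payload, sender)).then(sendResponse);
    return true; // truthy return keeps sendResponse alive for async replies
  });
}
```

Because nothing here holds long-lived state outside the listener, the worker can be torn down between events, which is exactly what MV3's "standby" model asks for.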
But actually, I'm doing this highly experimental project only for the sake of testing the viability of Elm integration. So far, things seem to work, and in particular, because an extension typically consists of multiple HTML pages (e.g. popup, data manager, dashboard, and settings), we're again dealing with a multi-app situation. And that's where I think the elm-lang.org project offers great inspiration for a simple yet flexible code organization that works well for browser extension dev. If you look at its "src" directory, you'll see there's no "Main.elm" file. It turns out every Elm file in the "pages" directory is an independent Main file, which may use the modules in the "src" directory; that's where the shared libraries live, and I prefer to call it the "modules" directory. Also note that in "elm.json", we only need to include this "modules" directory in the source-directories list, because, again, files in "pages" are simply entry points that you pass directly to the Elm compiler, so you don't have to tell the compiler where to find them. So thank you, Evan, for coming up with yet another simple yet effective design.
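For reference, a minimal elm.json for such a multi-main layout might look like the sketch below (directory name "modules" follows the convention described above; the dependency list is just a plausible baseline, not prescriptive). Each page then gets compiled directly, e.g. elm make pages/Popup.elm --output=dist/popup.js:

```json
{
  "type": "application",
  "source-directories": ["modules"],
  "elm-version": "0.19.1",
  "dependencies": {
    "direct": {
      "elm/browser": "1.0.2",
      "elm/core": "1.0.5",
      "elm/html": "1.0.0"
    },
    "indirect": {
      "elm/json": "1.1.3",
      "elm/time": "1.0.0",
      "elm/url": "1.0.0",
      "elm/virtual-dom": "1.0.3"
    }
  },
  "test-dependencies": { "direct": {}, "indirect": {} }
}
```

The key point is the single entry in source-directories: the "pages" entry points never need to be listed there, since they're handed to the compiler by path.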
Speaking of which, this is yet another situation where the simplified Elm dev setup mentioned above shines. The LSP-based VS Code extension unfortunately doesn't support such esoteric project structures, but if all you have is a build script that calls elm make, plus simple elm-format automation, you get vastly expanded flexibility, and thus you can adapt Elm to tackle more kinds of problems, like writing more reliable and maintainable browser extensions. The thing is, I don't miss the LSP features that much; after all, the Elm compiler is "the guy". If anything were to be done about Elm's IDE story, it'd be about getting the compiler to communicate with the editor through a more streamlined API. Rust and Kotlin are such examples.