I'm done making desktop applications (2009) (kalzumeus.com)
The onboarding funnel: Only a concern if you're trying to grow your user base and make sales.
Conversion: Only a concern if you're charging money.
Adwords: Only a concern if, in his words, you're trying to "trounce my competitors".
Support: If you're selling your software, you kind of have to support it. Minor concern for free and open source.
Piracy: Commercial software concern only.
Analytics and Per-user behavior: Again, only commercial software seems to feel the need to spy on users and use them as A/B testing guinea pigs.
The only point of his I can agree with that makes web development better is the shorter development cycles. But I would argue that this is only a "developer convenience" and doesn't really matter to users (in fact, shorter development cycles can be worse for users as their software shifts like quicksand out from under them). To me, in my open source projects, my "development cycle" ends when I push to git, and that can be done as often as I want.
There are some things that NATURALLY lend themselves to a website - like doctor's appointments, bank balance, etc - but it's still a pain when, on logging in to "quickly check that one thing" that I finally got the muscle memory down for because I don't do it that often, I get a "take a quick tour of our great new overhauled features" where now that one thing I wanted is buried 7 levels deep or something, or just plain unfindable.
For something like Audacity (the audio program), how the heck does it make sense to put that on a website (I'm just giving a random example, I don't think they've actually done this), where you first have to upload your source file (privacy issues), manipulate it in a graphically/widget-limited browser - do they have a powerful enough machine on the backend for your big project? - then download the result? It's WAY, WAY better to be able to run the code on your own machine, etc. AND to be stable, so that once you start a project, it won't break halfway through because they changed/removed that one feature you relied upon (no, not thinking of AI at all, why do you ask? :-)
Yeah, but as a maintainer it's the opposite, isn't it? I don't have to worry about supporting version current - 3 in the Polish version of Windows because you're always running the version I've deployed in the environment I've deployed it in (I mean, yes, I'm oversimplifying given the frontend component, but that's still a much smaller surface).
I understand it was just an example, but you'd be surprised how far browsers have come with technologies like WebAssembly and WebGL. Forget audio editing, you can even do video editing - without uploading any files to a remote server[1]. All the processing is done locally, within your browser.
And if you thought that was impressive, wait till you find out that you can even boot the whole Linux kernel in your browser using a VM written in WASM[2]!
But I do agree with your points about lack of feature stability. I too prefer native apps just for the record (but for me, the main selling points are low RAM/CPU/disk requirements and keyboard friendliness).
[1] https://news.ycombinator.com/item?id=47847558
[2] https://joelseverin.github.io/linux-wasm/
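To make the "all processing is done locally" point concrete, the basic shape is roughly this (a minimal TypeScript sketch; `transform` is a stand-in for whatever WASM-compiled editing core an app like [1] ships, not a real API):

    // The file is read and transformed in the page, then handed back as a
    // download -- the media bytes never touch a server.
    declare function transform(bytes: Uint8Array): Uint8Array; // from WASM glue (hypothetical)

    const input = document.querySelector<HTMLInputElement>("#file")!;
    input.addEventListener("change", async () => {
      const file = input.files?.[0];
      if (!file) return;

      const bytes = new Uint8Array(await file.arrayBuffer()); // stays in-browser
      const out = transform(bytes);

      // Offer the result straight back to the user as a local download.
      const url = URL.createObjectURL(new Blob([out]));
      const a = Object.assign(document.createElement("a"), {
        href: url,
        download: `edited-${file.name}`,
      });
      a.click();
      URL.revokeObjectURL(url);
    });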
And if this is such a compelling value proposition for full-featured desktop productivity applications, why didn't Java Web Start set the world on fire?
Putting aside the video editing example for a bit, consider the photo editing web app Photopea, which is an excellent alternative to Adobe Photoshop. Linux is in urgent need of a Photoshop-like editor (and no, GIMP doesn't cut it), but Photopea does a decent enough job for many amateurs and even some pros. For a lot of these folks, Photoshop is one of the last things stopping them from switching to Linux, so apps like Photopea fill that gap. And guess what, Photopea works great on Android too.
Another use case is restricted environments where you can't easily find and install apps, eg immutable distros, or work computers. I use Photopea on my work PC quite regularly for light editing, because MS Paint sucks, and my role doesn't really justify going thru the hassle of getting the approvals to get an editor installed. So like it or not, web apps have their place.
How is Photopea better than GIMP? How is it better than Krita?
As for Krita, its UI is of course a lot better than GIMP's, but unfortunately it's mostly skewed towards digital illustration and art creation (and it's great at that!) and less towards photo editing/image manipulation.
- Photopea has the best .PSD support of the three, which is pretty crucial for people wanting to switch from Photoshop.
- Possibly the most important feature that Photoshop users depend on these days is Content-Aware Fill and Magic Replace for object removal and background patching. GIMP lacks native functionality for this (there are third-party plugins, but I haven't used them, so I can't comment on those). As for Krita, once again it lacks these tools - and most retouching tools, in fact - as it's more geared towards digital art creation rather than image manipulation.
It's the issue of friction. Also, good webapps are often _better_ than native apps, as they can support tabs.
> And if this is such a compelling value proposition for full-featured desktop productivity applications, why didn't Java Web Start set the world on fire?
Because it relied on Java and Swing, which were a disaster for desktop apps.
All the native apps I use support tabs; it's a basic feature of the macOS windowing APIs: https://developer.apple.com/documentation/appkit/nswindowtab...
https://pikimov.com/
I grew up reading his writings and learned pretty quickly to read them as "this is what I'm thinking right now in my life" even though they're written more as authoritative and decisive pronouncements from an expert. Over time he's gone from SEO expert to $30K/week consulting expert to desktop app expert to indie SaaS expert to recruiting industry expert to working for Stripe Atlas. It was fun to read his writings at each point, but after so many changes I realized it was better to read it as a blog of ongoing learnings and opinions, not necessarily as retrospective wisdom shared from years of experience on the topic, even if that's what the writing style conveys.
So I agree that the advice in the post should be taken entirely in context of pursuing the specific goals he was pursuing at the time. The less your goals happen to align, the less relevant the advice becomes.
Today, even the minimal steps of creating a desktop app have lost their appeal, but I like showing how I solved a problem, so my "apps" are Jupyter notebooks.
Most things I create in my free time are for my and my family's consumption and typically benefit immensely from the write once run everywhere nature of the web.
You can launch a small toy app on your intranet and run it from everywhere instantly. And typically these things are also much easier to interconnect.
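And the "launch instantly" part is barely an exaggeration. A minimal sketch of the pattern (TypeScript/Node chosen here purely as an assumption; any HTTP server works):

    // A LAN-only toy app: one file, and "deployment" is just running it
    // on any machine on the home network.
    import { createServer } from "node:http";

    createServer((_req, res) => {
      res.setHeader("content-type", "text/html");
      res.end("<h1>Family dashboard</h1><p>Reachable from every device on the LAN.</p>");
    }).listen(8080, "0.0.0.0", () => {
      console.log("Serving on port 8080 on the local network");
    });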
Desktop publishing.
Brokerage apps (some are webapps but many ship an actual desktop app).
And yet, to me, something changed: I still "install apps locally", but "locally" as in "only on my LAN", and they can be webapps too. I run them in containers (and the containers are in VMs).
I don't care much as to whether something is a desktop app, a GUI or a TUI, a webapp or not...
But what I do care about is being in control.
Say I'm using "I'm Mich" (immich) to view family pictures: it's shipped (it's open source), I run it locally. It'll never be "less good" than it is today: for if it is, I can simply keep running the version I have now.
It's not open to the outside world: it's to use on our LAN only.
So it's a "local" app, even if the interface is through a webapp.
In a way this entire "desktop app vs webapp" is a false dichotomy, especially when you can have a "webapp (really in a browser) that you can self-host on a LAN" and then a "desktop app that's really a webapp (say wrapped in Electron) that only works if there's an Internet connection".
KDE has analytics, they're just disabled by default (and I always turn them on in the hopes of convincing KDE to switch the defaults to the ones I like).
If development ends at a git push and users are left to build/fend for themselves (granted, this is a lot of open source), then yeah, not much difference. But if you're building and packaging it up for users (which you're more likely to be doing if your project is specifically an app), then the difference is massive.
Times have changed quite a bit from nearly 20 years ago.
And his point about randomly moving buttons to see if people like it better?
No fucking thanks. The last thing I need is an app made of quicksand.
The user interface is your contract with your users: don't break muscle memory! I would ditch FF-derivatives, but I'm held hostage by them because the good privacy browsers are based on FF.
Stop following fads! Be like craigslist: never change, or if you do then think long and hard about not moving things around! Also if you're a web/mobile developer, learn desktopisms! Things don't need to be spaced out like everything is a touch interface. Be dense like IRC and Briar, don't be sparse like default Discord or SimpleX! Also treat your interfaces like a language for interaction, or a sandbox with tools; don't make interfaces that only corral and guide idiots, because a non-idiot may want to use it someday.
I really wish Stallman could be technology czar, with the power to [massively] tax noncompliance to his computing philosophy.
No, it's a concern if you care about impact. Improving commercial profits is one kind of impact that is relevant to for-profit corporations, but there is also impact like "improving user privacy" or "helping lower-income people manage their finances with a free-as-in-beer product". This impact can be measured and the feedback can be used to improve the product according to non-profit, non-commercial goals.
There are also people who build open-source software as a hobby and couldn't give two shits whether other people use it or not. More power to them. For those people, you are correct. https://book.iced.rs/philosophy.html comes to mind.
Then there are projects like Streisand (maybe a bad example, I see it has since been archived, but it came to mind) that want to change the world in some way. Those projects very much do need to care about metrics like, how many people are downloading the software, are people opening GitHub issues, are we obscure or is our target audience talking about us, hopefully positively but if not, how can we improve that? Value must always be worth the cost (even when the code is free, it must be worth the time to download, give it a try, give it CPU/RAM, maintain/upgrade the installation) - are we giving users value or are they churning?
It might blow your mind but even non-profits hire people with MBAs (and universities offer programs for MBAs that focus on non-profit management), precisely because some organizations focus on non-financial impact.
For some things a desktop app is required (more system access) or offers some competitive UX advantage (although this reason is shrinking all the time). Short of that, users are going to choose web 95% of the time.
Ignoring the fragmentation of course; although that seems to be getting less and less each year (so long as you ignore Safari).
Counter-counterpoint: Maybe it's time to require professional engineer certification before a software product can be shipped in a way that can be monetized. It's to filter devs from the industry who look at browsers today and go "Yeah, this is a good universal app engine."
Maybe useful higher-level elements like layout, typography, etc. could be shared as frameworks.
There are many alternate histories where a different base application layer (app engine) could have been designed for the web (the platform).
The impact on people's time, money, and the environment is proportional.
Does it? Have you compared a web app written in a sufficiently low level language with a desktop app?
And if we're talking about simple GUI apps, you can run them in 10 megabytes or maybe even less. It's cheating a bit as the OS libraries are already loaded - but they're loaded anyway if you use the browser too, so it's not like you can shave off of that.
What about in QML, which uses Web technologies like CSS, JS and even basic HTML? The whole KDE Plasma 6 desktop is built around these technologies now and I (and many others) consider it light and high-performance.
If you saddle up those technologies in the full browser everything then it will get larger, yes, but nothing requires you to do this, just as nothing requires providing your app as a full-fat Fedora install when a distroless container would have sufficed.
Plain JavaScript can be very fast and still come at relatively low resource demands, and the same is true of HTML and CSS. Many "plain desktop-native" applications often end up reinventing their own variants of HTML and CSS in the course of designing the UI anyway.
Qt is much lighter than your Chromium-based stacks but all the waste kind of adds up.
"just as nothing requires providing your app as a full-fat Fedora install when a distroless container would have sufficed" Containers are hungrier than running stuff on bare metal...
> Containers are hungrier than running stuff on bare metal...
Containers are tremendously lightweight compared to VMs. You might as well point out that running a full multiuser security-protected OS like Linux is hungrier than running on bare metal with DOS too. It's just as true, and even proportionally as true.
In any event a full Fedora container with all packages installed is going to be tremendously larger than a distroless hello-world "built" around Alpine, for instance, even though they both use container technologies. Same applies to Web technologies, you can certainly go and easily add a lot of waste using them but they are not themselves inherently wasteful.
A desktop app may consume more, but it's heavily focused on one thing, so a photo editor doesn't need to bring in a whole sound subsystem and a live programming system.
It would have been great if browsers remained lightweight html/image/hyperlink displayers, and something separate emerged as an actual cross-platform API, but history is what it is.
Remember LiveScript and early web browsers? It was almost cancelled by big tech because Java was supposed to be the cross-platform system. The web and JavaScript just BARELY escaped a big-tech smackdown. They stroked the ego of big tech by renaming it to JavaScript to honor Java. Licked some boots, promised a very mediocre, non-threatening UI experience in the browser, and big tech allowed it to exist. Then the whole world started using the web/JavaScript. It caught fire before big tech could extinguish it. Java itself got labeled a security threat by Apple/Microsoft for threatening the walled gardens, but that's another story.
You may not like browsers, but they are the ONLY thing big tech can't extinguish, due to ubiquity. Achieving ubiquity is not easy, not even possible for new contenders. Pray to GOD every day and thank her for giving us the web browser as a feasible cross-platform GUI.
Web browser UI available on all devices is not a failure, it's a miracle.
To top it all off, HTML/CSS/JavaScript is a pretty good system. The box model of CSS is great for cross-platform design. Things need to work on a massive TV or a small-screen phone. The open text-based nature is great for catering to screen readers to help the visually impaired.
The latest Wizbang GPU powered UI framework probably forgot about the blind. The latest Wizbang is probably stuck in the days of absolute positioning and non-declarative layouts. And with x,y(z) coords. It may be great for the next-gen 4-D video game, but sucks for general purpose use.
You've reminded me of the XKCD comic about standards: https://xkcd.com/927/
Do you really want a universal app engine? If you don't have a good reason for ignoring platform guidelines (as many games do), then don't. The best applications on any platform are the ones that embrace the platform's conventions and quirks.
I get why businesses will settle for mediocre, but for personal projects why would you? Pick the platform you use and make the best application you can. If you must have cross-platform support, then decouple your UI and pick the right language and libraries for each platform (SwiftUI on Mac, GTK for Linux, etc...).
That's a terrible solution that preserves nothing. Try using a screen reader with an app rendered onto a rectangle.
As a user, a properly implemented desktop interface will always beat the web. By properly, I mean obeying shortcut keys and conventions of the desktop world: alt+letter assignments for boxes and functions, Tab moves between elements, pressing PageUp/PageDown while in the text entry area of a chat window scrolls the chat history above and not the text entry area (looking at you, SimpleX), etc.
Sorry, not sorry. Web interface is interface-smell, and I avoid it as much as possible. Give me a TUI before a webpage.
Let's also remember that it's infinitely easier to keep a native app operational, since there's no web server to set up or maintain.
These concerns may not matter to you, the developer, but they absolutely matter to end-users.
If your prospective user can't find the setup.exe they just downloaded, they won't be able to use your software. If your conversion and onboarding sucks, they'll get confused and try the commercial offering instead. If you don't gather analytics and A/B test, you won't even know this is happening. If you're not the first result on Google, they'll try the commercial app first.
Users want apps that work consistently on all their devices and look the same on both desktop and mobile, keep their data when they spill coffee on the laptop, and let them share content on Slack with people who don't have the app installed. Open source doesn't have good answers to these problems, so let's not shoot ourselves in the foot even further.
If a piece of software doesn’t have users and the developers don’t care about the papercuts they are delivering, I would argue what they have created is more of an art project than a utility.
Art works without popular appeal can become highly treasured by some.
Open source software doesn't have to be ambitious to be worthwhile and useful. It can be artful, utilitarian, or an artifact of play. Commercial standards shouldn't be the only measure of good software.
Good! It's not for them! They can stay paypigs on subscription because they can't git gud!
1. Google, find, read... this is the same for web apps.
2. Click download and wait a few seconds. Not enough time to give up, because native apps are small. Heavy JS web apps might load for longer than that.
3. Click on the executable that the browser pops up in front of you. No closing the browser or looking for your downloads folder. It's right there!
3.5. You probably don't need an installer, and it definitely doesn't need a multi-step wizard. Maybe a big "install" button with a smaller "advanced options".
3.6. Your installer (if you even have it) autostarts the program after finishing.
4. The user uses it and is happy.
5. Some time later, the program prompts the user to pay, potentially taking them directly onto the payment form either in-app or by opening it in a browser.
6. They enter their details and pay.
That's one step more than a web app, but also a much bigger chance the user will come back to pay (you can literally send them a popup, you're a native app!).
I wonder whether Google, in its Don't Be Evil era, ever considered what they should do about software piracy, and what they decided.
I'd guess they would've decided to either discourage piracy, or at least not encourage it.
In the screenshot, the Google search query doesn't say anything about wanting to pirate, yet Google is suggesting piracy, a la entrapment.
(Though other history about that user may suggest a software piracy tendency, but still, Google knows what piracy seeking looks like, and they special-case all sorts of other topics.)
Is the ethics practice to wait to be sued or told by a regulator to stop doing something?
Or maybe they anticipate costs and competition for how they operate, and lobby for the regulation they want, so all they have to do is be compliant with it, and be let off the hook for lawsuits?
It is plundering those who didn't pay you for legal immunity.
Google's revenue model is and has always been web first. The more business happening on the web, the better it is for Google writ large, especially back when competing with Microsoft was a larger priority in that space.
It's much harder to pirate a web app, for obvious reasons, than a desktop app. Desktop apps being easy to pirate shifts professional software developers on the margin towards more web apps, which means more commercial activity centered on the web, which is good for Google. So one could imagine pretty good business reasons to be at least blasé on the topic.
In the early days of Google in the public consciousness, this turned into "you can make money without being evil." (From the 2004 S-1.)
Over time, it got shortened to "don't be evil." But this phrase became an obligatory catchphrase for anyone's gripes against Google The Megacorp. Hey, Google, how come there's no dark mode on this page? Whatever happened to "don't be evil"? It didn't serve its purpose anymore, so it was dropped.
Answering your question really depends on your priors. I could see someone honestly believing Google was never in that era, or that it has always been from the start. I strongly believe that the original (and today admittedly stale) sentiment has never changed.
The public already demonstrated that they adopted, misused and weaponized the maxim. Its retirement just sharpened the edge of that weapon. Now instead of "What happened to don't be evil?" it's "Of course Google is being evil," and everything is seen through that lens.
Tech industry culture today is pretty much finance bro culture, plus a couple decades of domain-specific conditioning for abuse.
But at the time Google started, even the newly-arrived gold rush people didn't think like that.
And the more experienced people often had been brought up in altruistic Internet culture: they wanted to bring the goodness to everyone, and were aware of some abuse threats by extrapolating from non-Internet society.
Google's "don't be evil" was a way for them to say "we're regular Joes, just like you; we're not Microsoft, and we're not going to do bad stuff like they do".
And if it were the altruistic Internet people they hired, the slogan/mantra could be seen as a reminder to check your ego/ambition/enthusiasm, as well as a shorthand for communicating when you were doing that, and that would be respected by everyone because it had been blessed from the top as a Prime Directive.
Today, if a tech company says they aspire not to be evil: (1) they almost certainly don't mean it, in the current culture and investment environment, or they wouldn't have gotten money from VCs (who invest in people motivated like themselves); (2) most of their hires won't believe it, except perhaps new grads who probably haven't thought much about it; and (3) nobody will follow through on it (e.g., witness how almost all OpenAI employees literally signed to enable the big-money finance-bro coup of supposedly a public interest non-profit).
For example, my impression at the time was that people thought that Google would be a responsible steward of Usenet archives:
https://en.wikipedia.org/wiki/Henry_Spencer#Preserving_Usene...
FWIW, it absolutely was believable to me at the time that another Internet person would do a company consistent with what I saw as the dominant (pre-gold-rush) Internet culture.
For example of a personality familiar to more people on HN, one might have trusted that Aaron Swartz was being genuine, if he said he wanted to do a company that wouldn't be evil.
(I had actually proposed a similar corporate rule to a prospective co-founder, at a time when Google might've still been hosted at Stanford. Though the co-founder was new to Internet, and didn't have the same thinking.)
Nowadays, it seems that mobile apps have the "best metrics" for b2c software. I'd be interested to read a contemporary version of this article.
This reminds me of a past job working for an e-commerce company. This wasn’t a store like Amazon that “everyone” uses weekly; it was a specific pricey fashion brand. They had put out a shitty iOS app, which was just a very bare-bones wrapper around the website. But they raved about how much better the conversion rates were there. Nobody would listen to me about how the customers who bother downloading a specific app for shopping at a particular retailer are obviously just superfans, so of course that self-selected group converts well.
So many people who should be smart based on their job titles and salaries got the causation completely backwards!
Do you have principles on how to tackle this? I feel stuck between the irrationality of anecdata and the irrationality of lying with numbers. As if the only useful statistic is one I collect and calculate myself. And, even then, I could be lying to myself.
https://www.successfulsoftware.net
Your employer most likely has.
I'd wager there are more people paying for software for their smart phone than any other platform they use.
I also remember people citing performance as a reason YouTube switched from Flash to HTML5. Searching those blogs now gives a lot of 404s. Like I said, this should've helped since it's video, but somehow YouTube immediately got slower anyway back then. I installed an extension to force it to use QuickTime Player for that reason.
The proprietary and insecure parts were real problems too. I'm fine with the decisions that were made, but this was a drawback.
I'm done making web apps (2026).
seriously desktop apps kinda own i just desktop-app'd a pwa made it do SSO auth at my org and now its just part of the self-serve application download kiosk and we're laughing at all the pain we've endured for so many years writing up proposals and billing to scale up web app infra for internal tooling and stuff.
im kinda enjoying coming back to earth right now with my team and we're just hmmmmmmm'ing a lot of things like this. we've had devops chasing 23498234892% availability with k8s and load balancers and all this stuff and we're now assessing how much of that cruft was completely unnecessary and made everything some amorphous blob of complexity and unpredictable billing and really gave devops a moat to just say "no" to so many things that came through the pipeline. there's so many things that can just be dragged back to like an actual on premise machine and served up through the internal network. we are... amused at how self-important we made ourselves out to be this past decade.
we're probably like days worth of goofing away from going to buy a few mac minis and plug them into some uninterruptible power supplies and just seeing how un-serious we can get with so much tooling we've built over the years. and for everything else, desktop apps. seriously desktop apps is like free infrastructure if you build it right.
That's a job for a web page. It doesn't need to be installed.
If anything, it’s my very faint hope that AI would give companies - especially non-software companies - the bandwidth to release two real native apps instead of just 2 builds of a shitty Electron app. Fat chance though, I think, not least because companies love to use their “bRaNdInG” on everything - so the native look and feel a real app gives you “for free” is a downside for the clowns that do the visual design for most companies.
Entry suggestions/completions are formally deprecated with no replacement since 2022. When I did get them working on the deprecated API there was an empty completion option that would segfault if clicked. The default behaviour didn’t hide completions on window unfocus, so my completions would hover over any other open window. There was seemingly no way to disambiguate tab vs enter events… it just sucked.
So after adding one widget I abandoned the project. It felt like the early releases of SwiftUI where you could add a list view but then would run into weird issues as soon as you tried adding stuff to it.
Similarly, trying to build an app for macOS practically depends on Swift by Sundell, Hacking with Swift, or others to make up for Apple’s lack of documentation in many areas. For years stuff like NSColor vs Color and similar API boundaries added friction, and the native macOS SwiftUI components just never felt normal while I tried making apps.
As heavy as web libraries and Electron are, at least they mostly work out of the box.
QtWidgets is extremely good though, even if it is effectively in maintenance mode.
Avalonia seems good too, though I haven't used it myself.
For prototyping / one-offs, I've always enjoyed working in Tcl/Itcl and Tk/Itk - object oriented Tcl with a decent set of widgets. It's not going to set the world on fire, but it's pretty portable (should mostly work on every platform with minor changes), has a way to package up standalone executables, can ship many-files-as-one with an internal filesystem, etc..
Of course, I spent ~15 years in EDA, so it's much more comfortable for me than it would be for most people, but it can easily be integrated into C/C++ as well with SWIG, etc.
In the near future I need to lash up a windows utility to generate a bunch of PDF files from a CSV (in concert with GhostScript), with specific filenames. I was trying to figure out the best approach and hadn't even considered Tcl and Tk - with Itcl you might have just given me a new rabbithole to explore! Thanks! (...I think!)
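For what it's worth, the core loop of that utility is tiny in any scripting language. A rough sketch of the GhostScript half (TypeScript/Node here for concreteness rather than Tcl; the two-column CSV layout, the text placement, and having `gs` on PATH are all assumptions):

    // Read a (hypothetical) CSV of "filename,text" rows, emit a one-page
    // PostScript file per row, and let GhostScript turn each one into a
    // PDF with the specific filename taken from the CSV.
    import { execFileSync } from "node:child_process";
    import { readFileSync, writeFileSync } from "node:fs";

    const rows = readFileSync("input.csv", "utf8")
      .trim()
      .split("\n")
      .slice(1) // skip header; naive split, no quoted-field handling
      .map((line) => line.split(","));

    for (const [filename, text] of rows) {
      // A minimal PostScript page (assumes `text` has no unescaped parens).
      const ps = `%!PS
    /Helvetica findfont 14 scalefont setfont
    72 720 moveto (${text}) show
    showpage
    `;
      writeFileSync("page.ps", ps);
      execFileSync("gs", [
        "-dBATCH", "-dNOPAUSE", "-sDEVICE=pdfwrite",
        `-sOutputFile=${filename}.pdf`, "page.ps",
      ]);
    }

The same shape ports straight to Tcl with exec, which may be the more fun rabbithole.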
QCanvas (or was it QGraphicsCanvas?) has long since been replaced by QGraphicsScene, which is much more capable and doesn't suffer from pixelation issues.
Anthropic has the resources of a fully armed and operational Claude Mythos (eyeroll), but they still choose to shit out an electron app on all of their users instead of going native like their competitors have done.
If your product targets a segment that expects a desktop app, do that. Web app, do that. Phone app, do that.
Something like this would have worked back in the days of the Walmart bargain software shelf, where people could impulse-buy a CD, put it into their computer, and have it automatically start and install, then show up on the desktop. Despite that being less common now, it was more streamlined in a way for many users.
Many of those people probably aren't logged into Steam or Windows Store either, so you have to do your own thing. It makes sense that web is the least friction for those people.
That's not true at all, any number of things could have killed bitcoin in its infancy. The stakes were just low. Somewhere out there is a lost collection of wallets of mine, collectively holding ~100btc ($1000 at the time). If regulators cracked down hard, that 100btc would have become just as worthless and either way I'd be out $1000.
"Risk" is an epistemic claim about the future taking the worse path. Obviously looking back it looks like risk-free money. That's not how it looked at the time. The "currency of the future" thing was always niche, especially after the crash in 2013, until a much larger cultural shift happened around 2015-ish.
Plenty of people will chime in with early bitcoin stories, and how they always believed it was going to go to the moon, etc. I always find it curious because my memory of the time period is that it was a means to an end, and that's how the overwhelming majority saw it and treated it. Funnily enough, it was thanks to that overwhelming majority that led to it being worth anything at all. If it was just a bunch of yahoos clamoring about the "currency of the future" thing, it probably would have gone absolutely fucking nowhere. The irony that the yahoos ended up becoming the majority I think is underappreciated.
Just. Don't. Subscribe.
Simple!
People who focus this much on "conversion" et al are dinosaurs who deserve extinction.
More importantly, the author is talking about the realities of trying to earn a decent living shipping independent software. That requires paying customers.
It's perfectly reasonable to want to be paid for your work, and it certainly doesn't warrant the vitriol in your comment.
"Over roughly the same period my day job has changed and transitioned me from writing thick clients in Swing to big freaking enterprise web apps."
I mean, the web kind of won. We just don't have a simple and useful way to design for the web AND the desktop at the same time. I also use the www of course, with a gazillion useful bits of CSS and JavaScript where I have to. I have not entirely given up on the desktop world, but I abandoned ruby-gtk and switched to ... jruby-swing. I know, I know, nobody uses Swing anymore. The point is not so much about using Swing per se, but simply to have a GUI that is functional on Windows, with the same code base (I ultimately use the same code base for everything on the backend anyway). I guess I would fully transition into the world wide web too, but how can you access files on the filesystem, create directories etc... without using node? JavaScript is deliberately restricted, node is pretty awful, ruby-wasm has no real documentation.
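For what it's worth, Chromium-based browsers do expose the File System Access API, which covers some of this without node: real directory and file writes, gated behind a user permission prompt (not available in Firefox or Safari). A minimal sketch, assuming a Chromium browser:

    // Create a directory and write a file from plain browser JavaScript.
    async function saveReport(text: string): Promise<void> {
      // The user picks a directory and grants access to it.
      const dir = await (window as any).showDirectoryPicker();

      // Create a subdirectory and a file inside it.
      const sub = await dir.getDirectoryHandle("reports", { create: true });
      const file = await sub.getFileHandle("report.txt", { create: true });

      const writable = await file.createWritable();
      await writable.write(text);
      await writable.close();
    }

The restriction is the point: the page only ever sees what the user explicitly hands it.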
ok, now do this analysis for mobile apps...
To save you a click: It's harder to monetize desktop apps than webapps.
Lol. LMAO, even.
ig remote work is the best of both worlds
There's an interview with him on the subject that is sadly behind a paywall now: https://www.indiehackers.com/post/how-i-grew-my-appointment-...
The world has changed a lot since then. The days where 37 Signals could build an empire out of simple web form apps and individuals could build and sell a SaaS that sends reminder texts are long gone. Most of the low hanging fruit was mined out long ago and most simple services have seen 100 different startups try to serve them already.
As much as Appointment Reminder was my prime example of a successful indie SaaS, the author's second startup has (with all due respect) become one of my prime examples of not validating product-market fit before building your product. They went on to build Starfighter, a company that was supposed to be a candidate vetting platform where people could do complex coding challenges and then get matched up with companies wanting to hire people. It was built partially in the open through their newsletter and in Hacker News posts.
If you thought doing LeetCode problems to get interviews was annoying, imagine having to spend hours or days going through a CTF where you hack multi-core CPUs to do something complex with a simulated stock market. I can't even remember the entire premise, but every time I read something about the company it was getting more and more complex. At the same time I was on other forums where candidates were going the opposite direction: becoming frustrated with the proliferation of coding interviews and refusing to do interview challenges that would take hours of their time.
I remember through the entire process thinking that it seemed like a questionable business plan that wouldn't really appeal to companies or to candidates. Even the Hacker News comments were full of (surprisingly polite) feedback saying that investing a lot of hours into solving programming puzzles to maybe get some recruiter interest wasn't appealing - https://news.ycombinator.com/item?id=10480390
Some amazing foreshadowing in that thread from one of the co-founders (not Patrick McKenzie):
> I literally lack the ability to form coherent sentences about our business that don't somehow involve how to render a graph of AVR basic blocks in a React web app, is how little we're thinking about how the game interacts with recruiting right now.
> We are going to get the CTF right, and then work from there to a sustainable recruiting business. We should have done it the other way around, but we didn't. :)
As you might have guessed, it didn't work out at all. It was weird for me to follow one of my indie startup heroes on their journey into their second business that skipped all of the normal startup advice and then reached the exact conclusion that advice was warning against.
It was enlightening to follow along and I'm glad they tried something different and shared it along the way, but watching it happen was a turning point for me in how I approach advice from any one individual author, blogger, writer, or influencer.
The idea that previous business success only weakly predicts future business success, and that that correlation probably becomes even weaker as one tries things increasingly far from the perimeter, is one I believe in but can't really trace back to any concrete source, which suggests my worldview just dynamically generates it off the dome in response to this story. I probably have imbibing their arguments over a decade plus to thank for that.
I'm still a big fan of patio11 though. Starfighter is maybe best seen these days as watching a man be professionally slightly embarrassed, then dusting himself off and going on to do a bunch of cool stuff afterwards anyway, weak correlations be damned.