NHacker Next
Bitwarden CLI compromised in ongoing Checkmarx supply chain campaign (socket.dev)
Setting min-release-age=7 in .npmrc (needs npm 11.10+) would have protected the 334 unlucky people who downloaded the malicious @bitwarden/cli 2026.4.0, published ~19+ hours ago (see https://www.npmjs.com/package/@bitwarden/cli?activeTab=versi... and select "show deprecated versions").
Same story for the malicious axios (@1.14.1 and @0.30.4, removed within ~3h), ua-parser-js (hours), and node-ipc (days). Wouldn't have helped with event-stream (sat for 2+ months), but you can't win them all.
Some examples (hat tip to https://news.ycombinator.com/item?id=47513932):
p.s. shameless plug: I was looking for a simple tool that would check your settings / apply a fix, and was surprised I couldn't find one, so I released something (open source, free, MIT yada yada), since sometimes one-click-fix convenience increases the chances people will actually use it. https://depsguard.com if anyone is interested. EDIT: looks like someone else had a similar idea: https://cooldowns.dev
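The cooldown setting described above, as a minimal sketch (the npm flag name and version requirement are taken from the parent comment; pnpm's equivalent setting name is from pnpm's docs — verify both against the version you're running):

```ini
# .npmrc — refuse to install package versions published less than
# 7 days ago (flag name as given above; needs a recent npm)
min-release-age=7

# pnpm's equivalent lives in pnpm-workspace.yaml and is in minutes:
#   minimumReleaseAge: 10080   # 7 days
```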
Most of these attacks don't make it into the upstream source, so solutions[1] that build from source get you ~98% of the way there. If you can't get a from-source build and have to pull directly from the registries, you can reduce risk somewhat with a cooldown period.
For the long tail of stuff that makes it into GitHub, you need to do some combination of heuristics on the commits/maintainers and AI-driven analysis of the code change itself. Typically run that and then flag for human review.
[1] Here's the only one I know that builds everything from source: https://www.chainguard.dev/libraries
(Disclaimer: I work there.)
This was supposedly discovered by "Socket researchers", and the product they're selling is proactive scanning to detect/block malicious packages, so I'd assume this would've been discovered even if no regular users had updated.
But I'd claim even for malware that's only discovered due to normal users updating, it'd generally be better to reduce the number of people affected with a slow roll-out (which should happen somewhat naturally if everyone sets, or doesn't set, their cool-down based on their own risk tolerance/threat model) rather than everyone jumping onto the malicious package at once and having way more people compromised than was necessary for discovery of the malware.
Having the forge control it half-defeats the point; the attackers who gained permission to push a malicious release might well have also gained permission to mark it as "urgent security hotfix, install immediately, 0 cooldown".
And no, however the compromised package reaches the forge, that is not the same thing as marking it an "urgent security hotfix", which would require manual approval from the forge maintainers, not an automated process. The only automated process would be a blackout period where automated scanners try to find issues, and a cool-off period where the release rolls out progressively to 100% of all projects that depend on it over the course of a few days or a week.
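The progressive rollout described above could be sketched as a simple ramp (purely illustrative; the function name and schedule are made up, not from any real forge):

```javascript
// Fraction of dependent projects that should receive a new release,
// ramping linearly from 0% to 100% over a rollout window that starts
// only after an initial blackout period reserved for automated scanners.
function rolloutFraction(hoursSinceRelease, blackoutHours = 24, rampHours = 144) {
  if (hoursSinceRelease <= blackoutHours) return 0; // scanners only, no users
  return Math.min(1, (hoursSinceRelease - blackoutHours) / rampHours);
}
```

With these defaults a release reaches everyone after a week; early adopters who opt into a shorter ramp effectively act as canaries.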
It's not a lack of care about privacy; the 7-day delay is like a new stage between RC and final release, where you pull for testing but not for production.
But for researchers who aren't sufficiently effective until the first victim starts shouting that something went sideways, the malicious actor would be wise to simply ensure no victim is aware until well after the cooldown period, implementing novel obfuscation that evades static analysis and the like.
While bad actors would be wise to ensure low-cooldown users are unaware, I would not say they can "simply" ensure that.
Code with any obfuscation that evades static analysis should become more suspicious in general. That's a win for users.
A longer window of time for outside researchers is a win for users -- unless the release fixes existing problems.
What we need is allowing the user to easily change from implicitly trusting only the publisher to incorporate third parties. Any of those can be compromised, but users would be better served when a malicious release must either (1) compromise multiple independent parties or (2) compromise the publisher with an exploit undetectable during cooldown.
Any individual user can independently do that now, but it's so incredibly time-consuming that only large organizations even attempt it.
It seems like if you were at all likely to be giving dependencies the extra scrutiny that discovers a problem, you'd probably know it? Most of the people who upgraded didn't help; they just got owned.
A cooldown gives anyone who does investigate more time to do their work.
Also, check out the VW Diesel scandal.
Needless to say I’m running all my JS tools in a Docker container these days.
isn't it obvious?
it should be obvious.
why isn't it obvious?
In the context of TFA, don't rely on third party github actions that you haven't vetted. Most of them aren't needed and you can do the same with a few lines of bash. Which you can also then use locally.
With pnpm, you can also use trustPolicy: no-downgrade, which prevents installing packages whose trust level has decreased since older releases (e.g. if a release was published with the npm cli after a previous release was published with the github OIDC flow).
Another one is to not run post-install scripts (which is the default with pnpm and configurable with npm).
These would catch most of the compromised packages, as most of them are published outside of the normal release workflow with stolen credentials, and are run from post-install scripts.
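A sketch of the script-blocking setting mentioned above (flag name per npm's config docs; pnpm already refuses dependency build scripts by default unless explicitly allowlisted):

```ini
# .npmrc — never run packages' preinstall/install/postinstall scripts
ignore-scripts=true
```

Note this also skips lifecycle scripts you legitimately need, so packages with native build steps may require an explicit rebuild afterwards.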
By contrast, a client-side cooldown doesn't require very much ecosystem or index coordination.
This kind of thinking is why I don't trust the security of open source software. Industry standard security practices don't get implemented because no one is being paid to actually care and they are disconnected from the users due to not making income from them.
(With that said, I think it also varies by ecosystem. These days, I think I can reasonably assert that Python has expended significant effort to stay ahead of the curve, in part because the open source community around Python has been so willing to adopt changes to their security posture.)
There's risk there of a monoculture categorically missing some threats if everyone is using the same scanners. But I still think that approach is basically pro-social even if it involves a "cooldown".
Exceptions to quarantine rules just invites attackers to mark malicious updates as security patches.
If every kind of breakage, including security bugs, results in a 2-3 hour wait to ship the fix, maybe that would teach folks to be more careful with their release process. Public software releases really should not be a thing to automate away; there needs to be a human pushing the button, ideally attested with a hardware security key.
TypeScript on its own is a great language, with a very interesting type system. Most other type systems can’t run doom.
https://simonwillison.net/2025/Feb/27/typescript-types-can-r...
That doesn't sound like a compliment.
I really appreciate that you didn't include a "curl | bash" command to paste for installing, but at the same time it's what I was expecting when I clicked.
I'm pretty sure I saw a comment on HN where the user wrapped all of their npm/pip/etc commands with bubblewrap. I've been thinking of doing something similar and basically just seeing how many of my daily commands I can sandbox. My hunch is that _most_ of them don't need to operate outside the current directory and don't need internet access.
Note: if you get an error, comment out the exclude and run it again. I know it's far from watertight (and it's useless if you're working with bitwarden itself), but I hope it blocks the low-hanging-fruit sort of attacks.
I think this is a bad idea, because it means the permissions of any new folders have to be closely guarded, which is easy to forget.
If you're brave you can run whonix.
The issue is developers who have publish access to popular packages - they really should be publishing and signing on a separate machine / environment.
Same with not doing any personal work on corporate machines (and having strict corp policy - vercel were weak here).
Avoid software that tries to manage its own native (external, outside the language ecosystem) dependencies or otherwise needs pre/post-install hooks to build.
If you do packaging work, try to build packages from source code fetched directly from source control rather than relying on release tarballs or other published release artifacts. These attacks are often more effective at hiding in release tarballs, NPM releases, Docker images, etc., than they are at hiding in Git history.
Learn how your tools actually build. Build your own containers.
Learn how your tools actually run. Write your own CI templates.
My team at work doesn't have super extreme or perfect security practices, but we try to be reasonably responsible. Just doing the things I outlined above has spared me from multiple supply chain attacks against tools that I use in the past few weeks.
Platform, DevEx, and AppSec teams are all positioned well to help with stuff like this so that it doesn't all fall on individual developers. They can:
I think there's a lot of things to do here. The hardest parts are probably organizational and social; coordination is hard and network effects are strong. But I also think that there are some basics that help a lot. And developers who serve other developers, whether they are formally security professionals or not, are generally well-positioned to make it easier to do the right thing than the sloppy thing over time.

An alternative hypothesis: what if 7-day cooldowns incentivize security scanners, researchers, and downstream packagers to race to uncover problems within a 7-day window after each release?
Without some actual evidence, I'm not sure which of these is correct, but I'm pretty sure it's not productive to state either one of these as an accepted fact.
Either way there will be fewer eyes on it.
Many companies exist now whose main product is supply chain vetting and scanning (this article is from one such company). They are usually the ones writing up and sharing articles like this - so the community would more than likely hear about it even if nobody was actually using the package yet.
> This plan works by letting software supply chain companies find security issues in new releases. Many security companies have automated scanners for popular and less popular libraries, with manual triggers for those libraries which are not in the top N.
You're still pulling a lot of dependencies. At least they're pinned though.
https://lib.rs/crates/rbw
Takes what, maybe 15 seconds to compile on a high-core machine from scratch? Isn't the end of the world.
Worse is the scope of having to review all those things if you'd like to use it for your main passwords; that'd be my biggest worry. Luckily most are well established already, as far as I can tell.
326 packages is approximately 326 more packages than I will ever fully audit to a point where my employer would be comfortable with me making that decision (I do it because many eyes make bugs shallow).
It's also approximately 300 more than the community will audit, because it will only be "the big ones" that get audited, like serde and tokio.
I don't see people rushing to audit `zmij` (v1.0.19), despite it having just as much potential to backdoor my systems as tokio does.
Chance of someone auditing all of them is virtually zero, and in practice no one audits anything, so you are still effectively blindly trusting that none of those 326 got compromised.
Cargo is modeled after NPM. It works more or less identically, and makes adding thousands of transitive dependencies effortless, just like NPM.
Rust's stdlib is pretty anemic. It's significantly smaller than node's.
These are decisions made by the bodies governing Rust. It has predictable results.
It's just that if you give developers tools, they will absolutely abuse them to the highest extent. The problem with cargo is it's too good, so of course devs are gonna be pulling hundreds of dependencies. It's also why things like Claude Code have so much potential for shitty outcomes. Developers are lazy to the n-th degree. In fact, being a developer is predicated on being lazy. Laziness is the whole motivation behind software!
Ultimately in any language you get the sort of experience you build for yourself with the environment you setup, it is possible in most languages to be more conservative and minimal even if the ecosystem at large is not, but it does require more care and time.
The Rust vs. Node comparison seems very shallow to me, and it seems to require a lot of eye squinting to work.
People have beef with Rust in other, more emotional ways, and welcome the opportunity to pretend they dislike it on seemingly-rational grounds a la "Node bad amirite lol".
I am openly admitting I don't care. Such libraries are in huge demand and every programming language ecosystem gains them quite early. So to me the risk of malicious code in them is negligibly small.
TL;DR, the official libraries are going to be split into three parts:
---
1) `core.*` (or maybe `lang.*` or `$MYLANGUAGE.*` or w/e, you get the point) this is the only part that's "blessed" to be known by the compiler, and in a sense, part of the compiler, not a library. It's stuff like core type definitions, interfaces, that sort of stuff. I may or may not put various intrinsics here too (e.g. bit count or ilog2), but I don't know yet.
Reserved by the compiler; it will not allow you to add custom stuff to it.
There is technically also a "pseudo-package" of `debug.*` ("pseudo" in the sense that you must always use it in the full prefixed form, you can't import it), which is just going to be my version of `__LINE__` and similar. Obviously blessed by compiler by necessity, but think stuff like `debug.file` (`__FILE__`), `debug.line` (`__LINE__`), `debug.compiler.{vendor,version}` (`__GNUC__`, `_MSC_VER`, and friends). `debug` is a keyword, which makes it de-facto non-overridable by users (and also easy for both IDEs and compiler to reason about). Of course I'll provide ways of overriding these, as to not leak file paths to end users in release builds, etc.
(side-note: since I want reproducible builds to be the default, I'm internally debating even having a `debug.build.datetime` or similar ... one idea would be to allow it but require explicitly specifying a datetime [as build option] in such cases, lest it either errors out, or defaults to e.g. 1970-01-01 or 2000-01-01 or whatever for reproducibility)
---
2) `std.*`, which is minimal, 100% portable (to the point where it'd probably even work in embedded [in the microcontroller sense, not "embedded Linux" sense] systems and such --- though those targets are, at least for now, not a primary goal), and basically provides some core tooling.
Unlike #1, this is not special to the compiler ... the `std.*` package is de jure reserved, but that's not actually enforced at a technical level. It's bundled with the language, and included/compiled by default.
As a rule (of thumb, admittedly), code in it needs to be inherently portable, with maybe a few exceptions here or there (e.g. for some very basic I/O, which you kind of need for debugging). Code is also required to have no external (read: native/upstream) dependencies whatsoever (other than maybe libc, libdl, libm, and similar things that are really more part of the OS than any particular library).
All of `std.*` also needs to be trivially sandboxable --- a program using only `core.*` & `std.*` should not be able to, in any way, affect anything outside of whatever the host/parent system told it that it can.
---
3) `etc.*`, which actually work a lot like Rust/Cargo crates or npm packages in the sense that they're not installed by default ..... except that they're officially blessed. They likely will be part of a default source distribution, but not linked to by default (in other words: included with your source download, but you can't use them unless you explicitly specify).
This is much wider in scope, and I'm expecting it to have things like sockets, file I/O (hopefully async, though it's still a bit of a nightmare to make it portable), downloads, etc. External dependencies are allowed here --- to that end, a downloads API could link to libcurl, async I/O could link to libuv, etc.
---
Essentially, `core.*` is the "minimal runtime", `std.*` is roughly a C-esque (in terms of feature count, or at least dependencies) stdlib, and `etc.*` are the Python-esque batteries.
Or to put it differently: `core.*` is the minimum to make the language run/compile, `std.*` is the minimum to make it do something useful, and `etc.*` is the stuff to make common things faster to make. (roughly speaking, since you can always technically reimplement `std.*` and such)
I figured keeping them separate allows me to provide for a "batteries included, but you have to put them in yourself" approach, plus clearly signaling which parts are dependency-free & ultra-sandbox-friendly (which is important for embedding in the Lua/JavaScript sense), plus it allows me to version them independently in cases of security issues (which I expect there to be more of, given the nature of sockets, HTTP downloads, maybe XML handling, etc).
Using crates is a choice. You can write fully independent C++ or you can pull in Boost + Qt + whatever libraries you need. Even for C programs, I find my package manager downloading tons of dependencies for some programs, including things like full XML parsers to support a feature I never plan to use.
Javascript was one of the first languages to highlight this problem with things like left-pad, but the xz backdoor showed that it's also perfectly possible to do the same attack on highly-audited programs written in a system language that doesn't even have a package manager.
Cargo made its debut in 2014, a year before the infamous left-pad incident, and three years before the first large-scale malicious typosquatting attacks hit PyPI and NPM. The risks were not as well-understood then as they are today. And even today it is very far from being a solved problem.
To me a rich stdlib is not a selling point. Both ecosystems have a ton of very high-quality libraries.
That's a damning indictment of Rust. Something as big as Chrome has IIRC a few thousand dependencies. If a simple password manager CLI has hundreds, something has gone wrong. I'd expect only a few dozen.
How many are third-party?
Frustratingly, they're not by default though; you need to explicitly use `--locked` (or `--frozen`, which is an alias for `--locked --offline`) to avoid implicit updates. I've seen multiple teams not realize this and get confused about CI failures from it.
The implicit update surface is somewhat limited by the fact that versions in Cargo.toml implicitly assume the `^` operator on versions that don't specify a different operator, so "1.2.3" means "1.2.x, where x >= 3". For reasons that have never been clear to me, people also seem to really like not putting the patch version in though and just putting stuff like "1.2", meaning that anything other than a major version bump will get pulled in.
Not quite: "1.2.3" = "^1.2.3" = ">=1.2.3, <2.0.0" in Cargo [0], and "1.2" = "^1.2.0" = ">=1.2.0, <2.0.0", so you get the "1.x.x" behavior either way. If you actually want the "1.2.x" behavior (e.g., I've sometimes used that behavior for gmp-mpfr-sys), you should write "~1.2.3" = ">=1.2.3, <1.3.0".
[0] https://doc.rust-lang.org/cargo/reference/specifying-depende...
From thinking it through more closely, it does actually seem like it might be a little safer to avoid specifying the patch version; it seems like putting 1.2.3 would fail to resolve any valid version in the case that 1.2.2 is the last non-yanked version and 1.2.3 is yanked. I feel like "1.2.3" meaning "~1.2.3" would have been a better default, since it at least provides some useful tradeoff compared to "1.2", but with the way it actually works, it seems like putting a full version with no operator is basically worse than either of the other options, which is disappointing.
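The operator semantics being discussed above, written out as Cargo.toml requirement strings (semantics per the Cargo specifying-dependencies docs linked earlier; package names are placeholders):

```toml
[dependencies]
a = "1.2.3"    # caret (the default): >=1.2.3, <2.0.0
b = "~1.2.3"   # tilde: >=1.2.3, <1.3.0
c = "=1.2.3"   # exact: only 1.2.3
```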
If however `Cargo.toml` has changed then `cargo build` will have to recalculate the lockfile. Hence why it can be useful to be explicit about `cargo build --locked`.
Do you think it's an actively bad practice, completely benign, or something in between where it makes sense in some cases but probably should still be avoided in others? Offhand, the only variable I can think of that might influence a different choice is closed-source packages being reused within a company (especially if trying to interface with other package management systems, which I saw firsthand when working at AWS but I'm guessing is something other large companies would also run into), but I'm curious if there are other nuances I haven't thought of.
It’s not exactly a tough nut to crack: it changed 2-ish years ago after guidance (and cargo’s defaults) changed: https://blog.rust-lang.org/2023/08/29/committing-lockfiles/
Some people might argue that changing a function to return an error where it didn't previously would be a breaking change; I'd argue that those people are wrong about what semver means. From what I can tell, people having their own mental model of semver that conflicts with the actual specification is pretty common. Most of the time when I've had coworkers claim that semver says something that actively conflicts with what it says, after I point out the part of the spec that says something else, they end up still advocating for what they originally had said. This is fine, because there's nothing inherently wrong with a version schema other than semver, but I try to push back when the term itself gets used incorrectly because it makes discussions much more difficult than they need to be.
No wonder...
The problem is that you also want to update deps.
Or, conversely, encourage programming languages to increase the number of features in their standard libraries.
It's hard for me to take seriously any suggestion that .NET is a model for how ecosystems should approach dependency management based on that, but I guess having an abysmal experience when there are dependencies is one way to avoid risks. (I would imagine it's probably not this bad on Windows, or else nobody would use it, but at least personally I have no interest in developing on a stack that I can't expect to work reliably out of the box Linux)
But PSA: If something is critical to the business and you’re using npm, pin your dependencies. I’ve had this debate with other devs throughout the years and they usually point to the lockfile as assurance, but version ranges with a ^ mean that when the lockfile gets updated, you can pull in newer versions you didn’t explicitly choose.
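Pinning, concretely, just means writing exact versions with no `^` or `~` operators in package.json (versions below are illustrative):

```json
{
  "dependencies": {
    "axios": "1.6.8",
    "left-pad": "1.3.0"
  }
}
```

Exact pins still only constrain direct dependencies; transitive ranges resolve via the lockfile, so using `npm ci` (which installs exactly what the lockfile says and never rewrites it) covers the rest.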
If what you're building can put your company out of business it's worth the hassle.
We have things like dependabot for this.
https://docs.github.com/en/code-security/tutorials/secure-yo...
Security patches aren't like bugs or features where you can just roll a new version. Often patches need to be backported to older versions allowing software and libraries to be "upgraded" in place with no other change introduced.
Say you had software that controlled the careful mix of chemicals introduced into a municipal water supply. You just don't move from version 1.4 to 3.2, you fix 1.4 in place.
What you're looking for are Debian stable packages. :p
Yes, if they all just backport security patches we'll be fine. No, people are not going to just.
I promptly removed the bw cli programme after that, and I definitely won't be installing it again.
I use ghostty if it matters.
I found the default bwcli clunky and unacceptable, and it's why I don't use it, even though I still have a BitWarden subscription.
I can't think of a plausible explanation for how bw is at fault for its terminal output ending up, across a ssh session and tmux invocation, in the chat history of weechat. Even if bw auto-copied its output to the clipboard (which as far as I could tell by glancing at the cli options, it doesn't and can't), and the clipboard is auto-copied to remote hosts, clipboard contents shouldn't appear in an irc client's history without explicit hacking to do that.
The claim is just noise, particularly because it doesn't seem to have ever been investigated.
It seems prudent, if someone wants to use a cli, to use rbw rather than bw, or even just pass or keypassxc-cli (and self-managed cloud backup or syncing). However, that's based on bw being a javascript mess, not based on the unlikely event of bw injecting its output through ssh into irc clients.
> I believe it was `bw list` that I ran, assuming it would list the names of all my passwords, but to my surprise, it listed everything, including passwords and current totp codes.
This issue is clearly bitwarden's issue, and is an insane design that's extremely unfriendly. I just searched again and apparently, yes, `bw list` just dumps all the plaintext passwords out to the terminal! Doing an `ls` on a directory doesn't dump all the file contents, doing `list` should not reveal the secrets everywhere, and a design that includes dumping all passwords in plaintext from a listing is frankly panic inducing. I always take care not to cat secret key material to the screen, and even try to avoid piping it places.
Whatever else happened after having your entire password vault dumped to a terminal screen is probably unconnected to `bw` in any way, and 1024kb doesn't blame bitwarden for that directly, and says "I have no idea how this happened, but it was quite terrifying." which doesn't blame `bw` for the copying. The sin was dumping everything to the terminal.
Data on a terminal screen should be easy to be slung around, that's the entire point of a terminal screen. So it should be very hard to dump all your secrets to the terminal, there shouldn't even be a "dump all plaintext passwords to stdout" without some serious `--yes-i-mean-it` flags, much less the most basic command one can imagine using when trying to look up the name of a secret.
The full strength of the SOP applies by default. CORS is an insecurity feature that relaxes the SOP. Unless you need to relax the SOP, you shouldn't be enabling CORS, meaning you shouldn't be sending an Access-Control-Allow-Origin header at all.
If your front-end at www.example.com makes calls to api.example.com, then it's simple enough to just add www.example.com to CORS.
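A minimal sketch of that allowlist check (the names are invented; in a real server the returned value would be set as the Access-Control-Allow-Origin response header, and a null result means sending no CORS header at all, leaving the SOP fully intact):

```javascript
// Origins allowed to read responses cross-origin.
const ALLOWED_ORIGINS = new Set(["https://www.example.com"]);

// Return the Access-Control-Allow-Origin value for a request's Origin
// header, or null to omit the header entirely.
function corsAllowOrigin(requestOrigin) {
  return ALLOWED_ORIGINS.has(requestOrigin) ? requestOrigin : null;
}
```

Echoing back the specific allowed origin (rather than `*`) is also what lets credentialed requests work, since `*` is forbidden with credentials.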
So I do local dev on https://local.qa.yourappnamehere.com
> no synchronized password manager is safe
Care to elaborate? I'd agree that the security/availability tradeoff is different, but "not safe" is as nonsensical a blanket statement as "all/only offline/paper-based/... password managers are safe".
There is a time and place where it makes sense, and a password manager CLI written in TypeScript importing hundreds of third-party packages is a direct red flag. It is a frequent occurrence.
We have seen it happen with Axios, which was one of the biggest supply chain attacks on the JavaScript/TypeScript ecosystem, and it makes no sense to build sensitive tools on that.
But how else are you going to check if a number is even or odd? Remember, the ONLY design goal is not repeating yourself (or in fact anything anyone has ever thought of implementing).
They probably caused it themselves, somehow, and then blamed bitwarden. Note in the original comment they aren't even entirely sure what the command was, and they weren't familiar with it or they wouldn't have been surprised by its output... so how can they be sure what else they did between that command and the weechat thing?
If the terminal or tmux fed terminal history into weechat, that's also not bw's problem.
I know this because I had the same surprised reaction
Quite bizarre to think how much of my well-being depends on those secrets staying secret.
If you're used to the clunkier workflow of copy-pasting from a separate app, then it's much easier to absent-mindedly repeat it for a not-quite-right url.
I have 1Password configured to require password to unlock once per 24 hours. Rest of the time I have it running in the background or unlock it with TouchID (on the MacBook Pro) or FaceID (on the iPhone).
It also helps that I don’t really sign into a ton of services all the time. Mostly I log into HN, and GitHub, and a couple of others. A lot of my usage of 1Password is also centered around other kinds of passwords, like passwords that I use to protect some SSH keys, and passwords for the disk encryption of external hard drives, etc.
Also a great way of missing out on one of the best protections of password managers; completely eliminating phishing even without requiring thinking. And yes, still requires you to avoid manually copy-pasting without thinking when it doesn't work, but so much better than the current approach you're taking, which basically offers 0 protection against phishing.
Rogue browser extensions for example could redirect you away from the bank website (if the bank website has poor security) when you go there, so even if you use the URL from the password manager, if you don't use the autofill feature, you can still get phished. And if the autofill doesn't show, and you mindlessly copy-paste, you'd still get phished. It's really the autofill that protects you here, not the URL in the password manager.
What? Are you not talking about browser bookmarks? They don't change because the target website starts redirecting somewhere, at least not the browsers I typically use.
Concretely, I think for redirect browser extension users I'd use "webRequest" permission, while for in page access you'd need a content-script for specific pages, so in practice they differ in what the extension gets access to.
Likewise I have links in the bookmarks bar on desktop.
I use these links to navigate to the main sites I use. And log in from there.
I don’t really need to think that way either.
But I agree that eliminating the possibility all-together is a nice benefit of using the browser integration, that I am missing out on by not using it.
It's better, but calling it so much better [that it's unreasonable to forgo the browser extension] is a bit silly to me.
1. Go to the website login page.
2. Trigger the global shortcut that invokes your password manager.
3. Your password manager appears with the correct entry usually preselected; if not, type 3 letters of the site's name.
4. Press Enter to perform the auto-type sequence.
There, an entire class of exploits entirely avoided. No more injecting third-party JS into all pages. No more keeping a listening socket in your password manager, ready to give away all your secrets.
The tradeoff? You now have to manually press ctrl+shift+space or whatever when you need to log in.
When you use autofill, the native application will prompt to disclose credentials to the extension. At that point, only those credentials go over the wire. Others remain inaccessible to the extension.
(plug: released a small CLI to auto-configure these — https://depsguard.com — I tried to find something that will help non developers quickly apply recommended settings, and couldn't find one)
We need to either screen everybody or cut off countries like North Korea and Iran from the Internet.
My two most precious digital possessions - my email and my Bitwarden account - are protected by a Yubikey that's always on my person (and another in another geographical location). I highly recommend such a setup, and it's not that much effort (I just keep my Yubikey with my house keys)
I got a bit scared reading the title, but I'm doing all I can to be reasonably secure without devolving into paranoia.
Maybe the web vault, but then we do not know when it's compromised (that's the whole idea); so we trust them not to have made a mess...
tl;dr
- https://cooldowns.dev
- https://depsguard.com
(disclaimer: I maintain the 2nd one; if I had known of the first, I wouldn't have released it, I just didn't find anything at the time. They do pretty much the same thing; mine is a bit of overkill, being written in Rust...)
That password cannot be cracked because it will always display as ** for anyone else.
My password is *****. See? It shows as asterisks so it's totally safe to share. Try it!
... Scnr •́ ‿ , •̀
So bold and so cowards at the same time...
Is this a serious question?
It's highly unlikely that the people behind an attack like this would come out (non-anonymously) and take credit. And it's unlikely they'll be caught. So does it matter to most people if it's Russians, Americans, Iranians, North Koreans, or some other country?
If you're a 3-letter agency, you'd want to know and potentially arrest them, but as a random guy on the internet, or even a maintainer, I really don't think it matters.
Why would you steal the key when you're already in the house?
And for the high profile, like some Iranian scientist who has the code to something important, they wouldn't use things like bitwarden.
I really see no use case when the nsa would need access to your bitwarden vault.
Not really, we already know that the NSA attempts shit like this all the time; if that came out, it'd mean the same as the Snowden leaks: a bunch of nerds going "Huh, who could have predicted this?". I don't see the point in it being Russia, China or the US; I'd dislike it just as much if the US did it as if Russia did, so that's why I asked why it matters.
for threat intel people, a lot.
obvious misdirection, but it does serve to make it very obvious it was a state actor.
Lol no, lots of groups do this, non-state ones too.
I've managed to avoid several security breaches in last 5 years alone by using KeePass locally on my own infra.
Bitwarden vaults were not compromised, there was a problem in a tool you used to access the secrets.
What makes it impossible for KeePass access tools to have these issues?
the superiority of keepass users scares away the bad actors
I'd say since it is a local-only tool, you don't really need to update it constantly, provided you are a sane person that doesn't use a browser extension. That makes it easier to audit and leaves you less at risk of having your tool compromised.
It doesn't have to be KeePass though; it can be any local password management tool like pass[1] or its GUIs, or simply a local encrypted file.
[1] https://www.passwordstore.org/
Browser password extensions are just perceived convenience over security.
The browser should stay isolated and separate from anything on the device instead of integrating "dog doors" into the software with the single biggest attack surface of any modern device.
In any case, the fact that the official Bitwarden client (which uses Electron, btw) and even the CLI are written in JavaScript/TypeScript should tell you everything you need to know about their coding expertise and security posture.
If someone links me to "rnicrosoft.com" with a perfectly cloned login page, my eyes might not notice that it's a phishing link, but my browser extension will refuse to autofill, and that will cause me to notice.
Phishing is one of the most common attacks, and also one of the easiest to fall for, so I think using the browser extension is on-net more secure even though it does increase your attack surface some.
I know proper 2fa, like webauthn/fido/yubikeys, also solves this (though totp 2fa does not), but a lot of the sites I use do not support a security key. If all my sites supported webauthn, I think avoiding the browser extension would be defensible.
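The protection described above boils down to exact hostname matching, which machines do reliably and eyes don't. A minimal sketch of the idea — the matching rule here is illustrative, real extensions match against the saved entry's origin with more nuance:

```python
# Sketch of the exact-hostname check an autofill extension performs,
# in contrast to a human eyeballing the address bar.
from urllib.parse import urlparse

def autofill_allowed(page_url: str, saved_hostname: str) -> bool:
    """Fill only when the page's hostname is the saved hostname or a subdomain of it."""
    host = urlparse(page_url).hostname or ""
    return host == saved_hostname or host.endswith("." + saved_hostname)

# The lookalike domain fails the check even though it fools the eye:
autofill_allowed("https://login.microsoft.com/", "microsoft.com")   # True
autofill_allowed("https://rnicrosoft.com/login", "microsoft.com")   # False
```

Because the comparison is on the parsed hostname, "rnicrosoft.com" is simply a different string — no amount of visual similarity matters.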
Sure, there may be some typosquatting here and there, but it tends to be much easier to spot than phishing URLs using Unicode variants.
If I ever need to fill the login, I just do any of these:
- KeePassXC has an auto-type feature, so I just choose the needed entry and let it auto-type.
- I enable the extension only when I need to log in and choose the entry I need to fill (not auto-fill, but only fill when I click on the account from the extension pop-up dashboard).
By the way, syncthing can manage conflicts by keeping one copy of the file with a specific name and date. You can also decide if one host is the source of truth.
I'd go further than that and say for me personally, the fact it's just a file is a selling point, not a "good enough" concession. I can just put passwords.kdbx alongside my notes.txt and other files (originally on a thumbdrive, now on my FTP server) - no additional setup required.
There will be people who use multiple devices but don't already have a good way to access files across them, but even then I'm not fully convinced that SaaS specifically for syncing [notes/passwords/photos/...] really is the most convenient option for them, as opposed to just being a well-marketed local maximum. Easy to add one more subscription, easy to suck it up when terms changes forbid you from syncing your laptop, easy to pray you're not affected by recurring breaches... but I'd suspect it often (not always) adds up to more hassle overall.
To date there have been zero instances when I needed to significantly change a password/service/login/credential solely from my phone and I was unable to access my laptop.
Additionally the file gets synchronized to a workstation that sits in my home office accessible by personal VPN, where it can be accessed in a shell session with the keepass CLI: https://tracker.debian.org/pkg/kpcli
You can use an extremely wide variety of your own choice of secure methods for how to get the file from the primary workstation (desktop/laptop) to your phone.
Plus, now you're responsible for everything. Backups, auditing etc.
Password habits for many people are now decades-old, and very difficult to break.
So gradually I don't feel I need syncing that much any more and switched to Keepass. I made up my mind that I'll only change the database from my computer and rclone push that to any cloud I like (I'm using Koofr for that since it's friendly to rclone); then on any other device I'll just rclone pull when needed. If I change something on other devices (like phones), I'll just note it locally there and change the database later.
But ofc if someone needs to change their data/password frequently then Bitwarden is clearly the better choice.
https://cyberpress.org/hackers-exploit-keepass-password-mana...
This wasn't a case where KeePass was compromised in any way, as far as I can tell. This appears to be a basic case of a threat actor distributing a trojanized version via malicious ads. If users made sure they are getting the correct version, they were never in danger. That's not to say that a supply chain attack couldn't affect KeePass, but this article doesn't say that it has.
Long term keepass users aren't going to be affected. If you mention software to others make sure you send them a link to a known safe download location instead of having them search for one (as new users searching like that are more at risk of stumbling on a malicious copy of the official site hosting a hacked version).
It's only a matter of time until _they_ are also popped :(.
> The beacon established command and control over HTTPS
Once the compromise point is preinstall, the usual "inspect after install" mindset breaks down. By then the payload has already had a chance to run.
That gets more interesting with agents / CI / ephemeral sandboxes, because short exposure windows are still enough when installs happen automatically and repeatedly.
Another thing I think is worth paying attention to: this payload did not just target secrets, it also targeted AI tooling config, and there is a real possibility that shell-profile tampering becomes a way to poison what the next coding assistant reads into context.
I work on AgentSH (https://www.agentsh.org), and we wrote up a longer take on that angle here:
https://www.canyonroad.ai/blog/the-install-was-the-attack/
And besides, you could always pull the package and inspect it before running the install. Unless you really know the installer and deeply understand its guarantees (e.g., whether it's possible for an install to deploy files outside of node_modules), it's insane to even vaguely trust it to pull and unpack potentially malicious code.
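Part of that inspection can be automated. A hedged sketch that opens a package tarball (as produced by `npm pack`) and flags lifecycle scripts that would execute code on install — the set of script names treated as risky is my assumption:

```python
# Flag install-time lifecycle scripts inside an npm tarball before installing it.
# npm-published tarballs place the manifest at package/package.json.
import json
import tarfile

RISKY = {"preinstall", "install", "postinstall", "prepare"}

def install_scripts(tarball_path: str) -> dict:
    """Return the lifecycle scripts in the tarball's manifest that run on install."""
    with tarfile.open(tarball_path) as tf:
        manifest = json.load(tf.extractfile("package/package.json"))
    scripts = manifest.get("scripts", {})
    return {name: cmd for name, cmd in scripts.items() if name in RISKY}
```

Even if the scan comes up empty, installing with `--ignore-scripts` is a further belt-and-suspenders step.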
https://mrshu.github.io/github-statuses/
Keep the password manager as a separate desktop app and turn off auto update.
The original pass is just a single shell script. It's short, pretty easy to read and likely in part because it's so simple, it's also very stable. The only real dependencies are bash, gnupg and optionally git (history/replication). These are most likely already on your machine and whatever channel you're getting them from (ex: distribution package manager) should be much more resilient to supply chain vulnerabilities.
It can also be used with a pgp smartcard (in my case a Yubikey) so all encryption/decryption happens on the smartcard. Every attempt to decrypt a credential requires a physical button press of the yubikey, making it pretty obvious if some malware is trying to dump the contents of the password store.
I wrote a version in Python and then rust back before the official CLI was released. Now you can use https://github.com/doy/rbw instead, much better maintained (since I don't use Bitwarden anymore).
The practical differences to me:
All that said, I still happily recommend BW, especially for people that are cost-conscious; the free BW plan is Good Enough for most everyone. Security-wise, they are equivalent enough to not matter.
How many times will this happen before people realise that updating blind is a poor decision?
I don’t even trust GitHub’s own actions. I used to use only the one to checkout a repository, limited to a specific tag, but then realised that even a tagged version, if it has dependencies which are not themselves tagged, could be compromised, so I stopped and now do the checkout myself. It’s not even that many lines of code, the fact GitHub has a huge npm package with dependencies to do something this basic is insane.
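A manual pinned checkout along those lines can be a few lines of a workflow step; owner, repo, and SHA below are placeholders:

```yaml
# Manual checkout pinned to a full commit SHA -- no third-party action involved.
- name: Checkout (pinned, without actions/checkout)
  run: |
    git init .
    git remote add origin https://github.com/OWNER/REPO
    git fetch --depth 1 origin <full-commit-sha>
    git checkout FETCH_HEAD
```

Fetching by full SHA means the workflow cannot silently pick up a moved tag or branch.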
Edit: The CLI itself apparently does not, which will have limited the damage a bit, but if it's installed as a snap, it might. Incidents like this should hopefully cause a rollback of this dumb system of forcefully and frequently updating people's software without explicit consent.
Also the time range provided in https://community.bitwarden.com/t/bitwarden-statement-on-che... can help with knowing if you were at risk. I only used the CLI once in the morning yesterday (ET), so I might not have been affected?
Assuming you had it already installed, you would be safe.
I've purged the snap. Really should purge snapd completely.
https://github.com/raycast/extensions/blob/6765a533f40ad20cc...
ActionPin — a GitHub Actions hardening checker that flags unpinned third-party actions, overbroad workflow permissions, install scripts that touch secrets, and agent-triggered jobs that can reach production credentials. ActionPin is hosted on GitHub.
I've also been preferring to roll things on my own in my side projects rather than pulling a package. I'll still use big, standalone libraries, but no more third-party shims over an API, I'll just vibe code the shim myself. If I'm going to be using vibe code either way, better it be mine than someone else's.
Aside from passwords, I store passkeys, secure notes, and MFA tokens.
It is mind boggling how an app that just lists a bunch of items can be so bloated.
We recently adopted it at work, and I find the thing to just produce garbage. I've never tuned out noise so quickly.
you have to appreciate the irony of a thing that's supposed to help protect you from vulnerabilities being one.
That thing is expensive as hell and used by lots of huge corps. I know at least one very large one in Mexico ... where the IT team is pretty useless.
So I don't doubt the possibility that in the near future we will hear about more hacks.
> Bitwarden’s Chrome extension, MCP server, and other legitimate distributions have not been affected yet.
https://github.com/remorses/sigillo
The irony! The security "solution" is so often the weak link.
Meanwhile, Bitwarden themselves state that end users were almost never affected: https://community.bitwarden.com/t/bitwarden-statement-on-che...
You had to install the CLI through NPM at a very short time frame for it to be affected. If you did get infected, you have to assume all secrets on your computer were accessed and that any executable file you had write access to may be backdoored.
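A rough first pass at that kind of audit: walk a directory tree and list executable files whose modification time falls inside the exposure window. This is only a heuristic — malware can forge timestamps — so treat hits as a starting point, not a clean bill of health:

```python
# List executable files modified since the compromise window opened.
import os
import stat

def recently_changed_executables(root: str, since_epoch: float) -> list:
    """Return paths under `root` that are user-executable and modified after `since_epoch`."""
    hits = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                st = os.stat(path)
            except OSError:
                continue  # broken symlink, permission error, etc.
            if st.st_mode & stat.S_IXUSR and st.st_mtime >= since_epoch:
                hits.append(path)
    return hits
```

Run it against your home directory with the window start reported in Bitwarden's statement, then inspect each hit by hand.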
It is:
- open source
- accountless (keys are identity)
- using a public git backend making it easily auditable
- easy to self host, meaning you can easily deploy it internally
- multisig, meaning even if a GitHub account is breached, malevolent artifacts can be detected
- validating a download transparently to the user, which only requires the download URL, contrary to sigstore
I wrote my own password generator - it's stateless, which has the advantage that I never have to back up or sync any data between devices. It just lets you enter a very long, secure master password, a service name and a username, then runs scrypt on these with parameters strong enough to make brute-force attacks infeasible.
For anything important, I also use 2FA.
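The scheme described above can be sketched roughly like this — the parameters and output encoding are illustrative guesses, not the commenter's actual implementation:

```python
# Stateless per-site password derivation: same inputs always yield the
# same password, so there is nothing to sync or back up.
import base64
import hashlib

def derive_password(master: str, service: str, username: str, length: int = 20) -> str:
    """Derive a site password from (master password, service, username) via scrypt."""
    digest = hashlib.scrypt(
        master.encode(),
        salt=f"{service}:{username}".encode(),
        n=2**14, r=8, p=1, dklen=32,  # memory-hard work factor (illustrative)
    )
    return base64.b85encode(digest).decode()[:length]
```

The obvious tradeoffs: a leaked master password compromises every derived password, and rotating a single site's password requires extra state (e.g. a per-site counter) that the pure scheme doesn't have.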
Also, didn't Microsoft (the owner of GitHub) get access to Claude Mythos in order to "seCuRe cRitiCal SoftWaRe InfRasTructUre FoR teh AI eRa"? How's securing GitHub Actions going for them?
I lean toward cooldown by default, and bypass it when an actual reachable exploitable ZeroDay CVE is released.
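That policy is simple to state as code. A hypothetical check — npm's registry reports publish timestamps under a package's `time` field, and the function below just applies the age rule to one of them:

```python
# Cooldown rule: refuse any version published less than `min_age_days` ago.
from datetime import datetime, timedelta, timezone

def passes_cooldown(published_at: datetime, min_age_days: int = 7) -> bool:
    """True when the version has been public long enough to be considered."""
    return datetime.now(timezone.utc) - published_at >= timedelta(days=min_age_days)
```

With a 7-day cooldown, the malicious @bitwarden/cli 2026.4.0 (pulled within a day) would never have been eligible for install.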
"a malicious package that was briefly distributed"
"investigation found no evidence that end user vault data was accessed or at risk"
"The issue affected the npm distribution mechanism for the CLI during that limited window, not the integrity of the legitimate Bitwarden CLI codebase or stored vault data."
"Users who did not download the package from npm during that window were not affected."
Downplaying so hard it's disgusting. Bitwarden failed and became a vector of attack. A vendor who is responsible for all my passwords. What a joke. All trust lost: by the incident and comms-style. Time to move before they make an even bigger mistake.
Praying to the security gods.
It seems like we've had non-stop supply chain attacks for months now?
on a more serious note: i told you so levels reaching new heights. don't use password managers. don't hand off this type of risk to a third party.
it's like putting all your keys in a flimsy lockbox outside of your apartment. at some point someone will trip over it, find the keys and explore -_-.
it being impractical with the amount of keys/passwords you need to juggle?
not an excuse. problem should and can be solved differently.
> Defend against hackers and data breaches
> Fix at-risk passwords and stay safe online with Bitwarden, the best password manager for securely managing and sharing sensitive information.
yep. literally from their website this moment... and the link to their "statement"[0] is nowhere on the front page.
Oh wait, there is a top banner..."Take insights to action: Bitwarden Access Intelligence now available Learn more >" nope.
[0]: https://community.bitwarden.com/t/bitwarden-statement-on-che...
If you see any package that has hundreds of libraries, that increases the risk of a supply chain attack.
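One quick way to see that number for a project is to count entries in the lockfile. A sketch against npm's v7+ `package-lock.json` format, where the `packages` map lists every installed path:

```python
# Count distinct installed packages straight from package-lock.json (npm v7+).
import json

def dependency_count(lockfile_path: str) -> int:
    """Number of installed packages, excluding the root project itself."""
    with open(lockfile_path) as f:
        lock = json.load(f)
    # Every key except "" (the root project) is an installed package.
    return sum(1 for key in lock.get("packages", {}) if key)
```

Hundreds of entries for a small tool is a sign your effective trust surface is much larger than the one name you typed into `npm install`.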
A password manager does not need a CLI tool.
[0] https://news.ycombinator.com/item?id=47585838
A password manager absolutely does need a CLI tool??
Why not? Even macos keychain supports cli.
I don't think macOS Keychain uses NPM and it isn't in TypeScript or Javascript and, yes it does not need a CLI either.
The npm and JavaScript/TypeScript ecosystem is part of the problem: its weak standard library encourages developers to import hundreds of third-party libraries, and it takes just ONE compromised transitive dependency for it to be game over.
You still have not said why this is an issue of having a CLI.
I complained about both. What does this say from the start?
>> Once again, it is in the NPM ecosystem.
> You still have not said why this is an issue of having a CLI.
Why do you need one? Automation reasons? OpenClaw? A CLI is an attractive way for an attacker to get ALL the passwords in your vault. If it runs in GitHub Actions, the CLI becomes a coveted target to compromise, which makes having one worse, not better, and makes exfiltration easier.
So it makes even more sense for a password manager not to have a CLI at all. This is even before mentioning npm and the JavaScript ecosystem.
I need one because I am not always using a graphical interface. What exactly in a GUI do you think makes it harder/less attractive for an attacker?
If the GUI code is compromised in the same way as the CLI, it'll have the same level of access to your vault as soon as you enter your master password, exactly the same as in the CLI.
JS is a target of these dumb accusations because it's literally the best cross-platform way to ship apps. Stop inventing issues where there are none.
The point is that the risk is far higher with more dependencies, as I said from the very start. And it happens much more frequently in the npm ecosystem than in others.
> If you are advocating developing without dependencies at all, then please start (with any language) and show us all how much you actually ship.
The languages in the former (especially Go) encourage you to use the standard library when possible. JavaScript/TypeScript does not, and encourages you to import more libraries than you need.
> JS is a target of these dumb accusations because it's literally the best cross-platform way to ship apps. Stop inventing issues where there are none.
Nope. It is a target because of the necessity for developers to import random packages to solve a problem due to its weak standard library and the convenience that comes with installing them.
You certainly have a JavaScript bias on this issue yourself, and there is clearly a problem; ignoring it just makes it worse.
If it wasn't an issue, we would not be talking about yet another supply chain attack in the NPM ecosystem.
Not to mention that a graphical application is just as vulnerable to supply chain attacks.
That's a wild statement. The CLI is just another UI.
The problem in this case is JS and the NPM ecosystem. Go would be an improvement, but complexity is the enemy of security. Something like (pass)age is my preference for storing sensitive data.