Bcachefs removed from the mainline kernel (lwn.net)
danw1979 7 hours ago [-]
As a long time sponsor of Kent's on Patreon - $10 a month since 2018, $790 total - I've found this bcachefs debacle really depressing.

I'm not even a bcachefs user, but I use ZFS extensively and I _really_ wanted Linux to get a native, modern COW filesystem that was unencumbered by the crappy corporate baggage that ZFS has.

In the comments on HN around any bcachefs news (including this one) there are always a couple throwaway accounts bleating the same arguments - sounding like the victim - that Kent frequently uses.

To Kent, if you're reading this:

From a long time (and now former) sponsor: if these posts are actually from you, please stop.

Also, it's time for introspection and to think how you could have handled this situation better, to avoid having disappointed those who have sponsored you financially for years. Yes, there are some difficult and flawed people maintaining the kernel, not least of which Linus himself, but you knew that when you started.

I hope bcachefs will have a bright future, but the ball is very clearly in your court. This is your problem to fix.

(I'm Daniel Wilson, subscription started 9th August 2018, last payment 1st Feb 2025)

webstrand 6 hours ago [-]
I am also a Patreon supporter, and I intentionally didn't switch to bcachefs until it was merged into the kernel. After all, Linus would never break userspace, right?

I am also frustrated by this whole debacle; I'm not going to stop funding him, though. Bcachefs is a solid alternative to btrfs. It's not at all clear to me what really happened to cause all the drama. A PR was made that contained something more feature-like than bugfix-like, and that resulted in a whole module being ejected from the kernel?

I really wish, though, that DKMS were not such a terrible solution. It _will_ break my boot, because it always breaks my boot. The Linux kernel really needs a stable module API so that out-of-tree modules like bcachefs are not impossible to reliably boot with.

LeFantome 5 hours ago [-]
I also waited until bcachefs was in the mainline. And I have been loving it. In fact, I even have multiple systems using it as root. Rock solid.

DKMS is not going to work for me though. Some of the distros I use do not even support it. Chimera Linux uses ckms.

As for how we got here, it was not just one event. Kent repeatedly ignored the merge window, submitting changes too late. This irked Linus. When Linus complained, Kent attacked him. And Kent constantly ran down the LKML and kernel devs by name (especially the btrfs guys). This burned a lot of bridges. When Linus pushed Kent out, many people rushed to make it permanent instead of rushing to his defense. Kent lost my support in the final weeks by constantly shouting that he was a champion for his users while doing the exact opposite of what I wanted him to do. I want bcachefs in the kernel. Kent worked very hard to get it pushed out.

It really is a great file system though.

koverstreet 2 hours ago [-]
I need to write up a proper patreon post on all this stuff, because there's a lot of misinformation going around.

No, I was not "ignoring the merge window". Linus was trying to make and dictate calls on what is and is not a critical bugfix, and with a filesystem-eating bug we needed to respond to, that was an unacceptable situation.

1oooqooq 49 minutes ago [-]
this crowd is wild. the author's answer is voted down :)

edit: by the time i commented it was already dark text. guess it recovered.

webstrand 5 hours ago [-]
yeah I'm probably going to have to start building my own kernel
stycznik 5 hours ago [-]
>It's not at all clear to me what really happened to make all the drama. A PR was made that contained something that was more feature-like than bugfix-like, and that resulted in a whole module being ejected from the kernel?

This isn't just a one-time thing. Speaking as someone who follows the kernel, this has apparently been going on pretty much since bcachefs first tried to get into Linus's tree. Kent even once told another kernel maintainer to "get your head examined" and was rewarded with a temporary ban.

Edit: To be fair, the kernel is infamous for being guarded by stubborn maintainers, but I guess the lesson to be learned here is that if you want your pet project to stick around in the kernel, you really can't afford to be stubborn yourself.

LeFantome 5 hours ago [-]
> this project sorely needs more than one person involved.

I hold out some hope that somebody else will get involved in bcachefs and that the new person will be able to resubmit bcachefs to the mainline.

My impression is that many people respect the technology and would be happy to have it back--they just cannot work with Kent.

But that is the reason this will probably not happen. It does not appear that anybody wants to work with Kent.

danw1979 5 hours ago [-]
> I guess the lesson to be learned here is if you want your pet project to stick around in the kernel you really can't afford to be stubborn yourself.

Amen.

And to your point about it being a "pet project" - I'm sure I could go look at the commit history, but is anyone other than Kent actually contributing meaningfully to bcachefs? If not, this project sorely needs more than one person involved.

thayne 4 hours ago [-]
> A PR was made that contained something that was more feature-like than bugfix-like, and that resulted in a whole module being ejected from the kernel?

From what I can tell, that was just the trigger for a clash of personalities between Linus and Kent, both of whom have a bit of a temper and refused to back down, which escalated to this.

ChocolateGod 5 hours ago [-]
> After all, Linus would never break userspace right?

But bcachefs never lived in userspace even before it was merged

boroboro4 5 hours ago [-]
In my opinion any filesystem lives in user space, through an implicit contract between the filesystem and the data stored on disk?
ChocolateGod 4 hours ago [-]
Applications do not talk to the filesystem directly; they talk to the generic I/O syscalls of the kernel, which handles the internal filesystem calls.

Those generic syscalls are (supposed to be) unchanging; the internal filesystem calls can and do change.

This is one reason why ZFS regularly breaks, on top of the fact that it can't use GPL-only exports.

cesarb 3 hours ago [-]
> Applications do not talk to the filesystem directly,

Sometimes, they do. For instance, BTRFS_IOC_CLONE to do a copy-on-write clone of a file's contents (now promoted to other filesystems as FICLONE, but many other ioctl operation codes are still btrfs-specific; and other filesystems have their own filesystem-specific operations).
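
For illustration, here's a minimal sketch of that filesystem-specific path (hypothetical file names, abbreviated error handling). FICLONE asks the filesystem to share extents with the source file instead of copying bytes; filesystems without reflink support fail with EOPNOTSUPP:

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <linux/fs.h>   /* FICLONE */

    int main(void)
    {
        int src = open("big-file", O_RDONLY);
        int dst = open("clone-of-big-file", O_WRONLY | O_CREAT, 0644);

        if (src < 0 || dst < 0)
            return 1;
        /* One ioctl, no bytes copied: dst now shares src's extents. */
        if (ioctl(dst, FICLONE, src) == -1)
            perror("FICLONE (filesystem may not support reflinks)");
        return 0;
    }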

yjftsjthsd-h 5 hours ago [-]
> I intentionally didn't switch to bcachefs until it was merged into the kernel. After all, Linus would never break userspace right?

It was explicitly marked experimental.

webstrand 5 hours ago [-]
Experimental, to me, means "it might eat your data, have backups" not "we might decide to remove this module from the kernel for non-technical reasons, good luck users"

Reiserfs sat in the kernel for years after its author went to prison, and didn't get removed on such short notice even though it was equally if not more unmaintained.

yjftsjthsd-h 5 hours ago [-]
> Experimental, to me, means "it might eat your data, have backups" not "we might decide to remove this module from the kernel for non-technical reasons, good luck users"

I don't think the kernel devs share that definition.

> Reiserfs sat in the kernel for years after he went to prison and didn't get removed on such short notice even though it was equally if not more unmaintained.

Reiserfs wasn't marked experimental.

webstrand 4 hours ago [-]
Although there are "experimental" labels on features in the kernel, there's no coherent definition of what that means. Historically most distros enabled CONFIG_EXPERIMENTAL features by default, until the flag was deemed meaningless and removed. Features in staging may be dropped quickly, but bcachefs was merged into mainline, and Linux has never removed a mainline-merged feature this quickly. The removal, not to mention its timeline, is unprecedented. There has never been any expectation that experimental means a feature may be removed on short notice.

Going forward, I'll agree with you: mainline does not care about users of experimental features. But it's disingenuous to suggest that this has always been the expectation.

Conan_Kudo 60 minutes ago [-]
The biggest mistake is not having a staging subtree for filesystems like we do for most other drivers.
yjftsjthsd-h 54 minutes ago [-]
Oh, that's interesting. Is there any reason it couldn't have just gone under drivers/staging?
Conan_Kudo 49 minutes ago [-]
I think it's mostly a policy thing? I brought it up a few years ago and the fs developers were not very enthused about the idea.
spoaceman7777 7 hours ago [-]
What about btrfs?

Seems to tick all of the boxes in regard to what you're looking for, and it's mature enough that major Linux distros are shipping with it as the default filesystem.

pfexec 6 hours ago [-]
Because every time btrfs is mentioned, 5 more people come out of the woodwork saying that it irreparably lost all their data. Sorry, but there are just too many stories for it to be mere coincidence.

Your statement is misleading. No one is using btrfs on servers. Debian and Ubuntu use ext4 by default. RHEL removed support for btrfs long ago, and it's not coming back:

> Red Hat will not be moving Btrfs to a fully supported feature. It was fully removed in Red Hat Enterprise Linux 8.

accelbred 6 hours ago [-]
AFAIK, Facebook uses BTRFS on their servers.
mdedetrich 38 minutes ago [-]
They do, but this is misleading due to a number of caveats.

First, they don't use btrfs's own RAID (aka btrfs-raid/volume management). They actually use hardware RAID, so they don't experience any of the stability/data integrity issues people experience with btrfs-raid. On top of this, Facebook's servers run in data centers that have 100% electricity uptime (these places have diesel generators for backup power).

Synology likewise offers btrfs on their NAS, but it's layered on top of mdadm (software RAID).

The main benefit that Facebook gets from btrfs is transparent compression and snapshots, and that's about it.

ChocolateGod 5 hours ago [-]
In a scenario where they don't have to worry about data going poof because it's used to run stateless containers (taking advantage of CoW to reduce startup time etc)
simtel20 2 hours ago [-]
For a long time they were running MySQL on it iirc (outsider, just asked at meetups etc.)
reissbaker 5 hours ago [-]
Ex-Meta employee here, and yup — this is true.
o11c 6 hours ago [-]
And they almost always 'forget' to mention "that was in 2010" or "I was using the BTRFS feature marked 'do not use, unstable'".

It's really difficult to get a real feel for BTRFS when people deliberately omit critical information about their experiences. Certainly I haven't had any problems (unless you count the time it detected some bitrot on a hard drive and I had to restore some files from a backup - obviously this was in "single" mode).

plqbfbv 4 hours ago [-]
My fairly recent experience with some timelines, posted 20d ago: https://news.ycombinator.com/item?id=45210911

Some of the most catastrophic ones were 3 years ago or earlier, but the latest kernel bug (point 5) was with 6.16.3, ~1 month ago. It did recover, but I had already mentally prepared for a night of restores from backups...

jeltz 4 hours ago [-]
One of the PostgreSQL devs managed to corrupt Btrfs about 2 years ago when working on async IO. Is that recent enough?
danw1979 6 hours ago [-]
And also, I've read plenty about how hard it has been to maintain btrfs over the years. It's never really felt like the future.

Plus I needed zvols for various applications. I've used ZFS on BSD for even longer so when OpenZFS reached a decent level of maturity the choice between that and btrfs was obvious for me.

teiferer 5 hours ago [-]
The argument about zvols doesn't really fit in here, unless bcachefs supports them?
lupusreal 3 hours ago [-]
I know somebody is going to say otherwise, but BTRFS seems genuinely rock solid in single-disk setups. OpenSUSE defaults to it so I've been using it for years. No problems, it's not even something I worry about.
chasil 2 hours ago [-]
Allowing btrfs to run out of space is well known to do irreparable damage.

Keeping it healthy means paying close attention to "btrfs fi df" and/or "btrfs fi usage".
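
For reference, a sketch of those checks (the mount point is illustrative):

    # Space broken out by data vs. metadata; metadata can fill up
    # while plain "df" still shows free space:
    btrfs filesystem df /mnt/pool
    # Per-device detail, including unallocated space:
    btrfs filesystem usage /mnt/pool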

ZFS also does not react well to running out of space.

Conan_Kudo 58 minutes ago [-]
I've been running Btrfs on Fedora for a decade now (and it's been the default since 2020). I have basically never done any of those things and it's been fine. I've had to do more babysitting with my ZFS systems than I did my Btrfs ones.
newZWhoDis 6 hours ago [-]
Synology exclusively uses BTRFS afaik, and there aren't widespread stories of data loss with their products.
pfexec 6 hours ago [-]
Took 10 seconds to find:

https://philip.greenspun.com/blog/2024/02/29/why-is-the-btrf...

> We had a few seconds of power loss the other day. Everything in the house, including a Windows machine using NTFS, came back to life without any issues. A Synology DS720+, however, became a useless brick, claiming to have suffered unrecoverable file system damage while the underlying two hard drives and two SSDs are in perfect condition. It’s two mirrored drives using the Btrfs file system

zejn 5 hours ago [-]
Synology does not use vanilla btrfs; they use a modified btrfs that runs over an mdraid mirror, which somehow communicates with the btrfs layer to supposedly fix errors when they occur. It's not clear how far behind that fork is.
ksec 5 hours ago [-]
Synology are still shipping kernel 5.10 on their latest model. And 4.4 only a few years prior.

I am hoping we will get ZFS from Ubnt NAS via update.

mdedetrich 37 minutes ago [-]
That's because they use mdadm for the RAID; the btrfs sits on top of a virtual mdadm volume ;)
bakugo 3 hours ago [-]
Not really data loss per se, but let me add my own story to the pile: just last week, I had a btrfs filesystem error out and go permanently read-only simply because the disk became full. Hours of searching and no solution to be found; it had to be reformatted.

I don't understand how btrfs is considered by some people to be stable enough for production use.

Macha 6 hours ago [-]
I think a lot of people interested in bcachefs were people who had lost faith in btrfs.
slashdave 6 hours ago [-]
Isn't OpenSUSE still shipping with btrfs on by default?
thereisnospork 5 hours ago [-]
The rolling "tumbleweed" variant does, afaict there aren't many issues related to btrfs[0]. Most problems I see seem to be Nvidia drivers or something choking during the update process (bad mirrors, odd package not updating, etc.).

[0]I'm currently evaluating OpenSuse as a possible W11 replacement, but not using it for anything serious atm.

yjftsjthsd-h 5 hours ago [-]
I used to run tumbleweed with btrfs, then it lost its root filesystem twice, now I distrust btrfs.
thayne 6 hours ago [-]
> I _really_ wanted Linux to get a native, modern COW filesystem

Doesn't btrfs fit that description? I know there are some problems with it, but it is definitely a native COW filesystem, and AFAIK it is "modern".

webstrand 6 hours ago [-]
Btrfs has a "happy path" so long as you don't use any features outside of the happy path, your data will generally be fine. Outside of that, your data is less reliably fine.

Btrfs also has issues with large numbers of snapshots, you have to cull them occasionally or things begin to slow down, bcachefs does not.

teiferer 5 hours ago [-]
> I _really_ wanted Linux to get a native, modern COW filesystem

Btrfs not good?

(Honest question.)

rrauenza 4 hours ago [-]
I've been using btrfs for maybe 10 years now? -- on a single Linux home NAS. I use it in a raid1c3 config (I used to do c2). raid1cN is mirroring with N copies. I have compression on. I use snapshots rarely.

I've had a few issues, but no data loss:

* Early versions of btrfs had an issue where you'd run out of metadata space (if I recall). You had to rebalance and sometimes add some temporary space to do that.

* One of my filesystems wasn't optimally aligned because btrfs didn't do that automatically (or something like that -- this was a long time ago.) A very very minor issue.

* Corruption (but no data loss, so I'm not sure it's corruption per se...) during a device replacement.

This last one caused no data loss, but a lot of error messages. I started a logical device removal, removed the device physically, rebooted, and then accidentally re-added the physical device while it was still being removed logically. It was not happy. I physically removed the device again, finished the logical removal, and did a scrub and the fsck equivalent. No errors.

I think that's a testament to its resiliency, but also a testament to how you can shoot yourself in the foot.

I've never used RAID5/6 on btrfs and don't plan to -- partly because of the scary words around it, but I also assume the rebuild time is longer.

bravetraveler 4 hours ago [-]
Funny to hear your success; I've managed to break almost every mirror I've entrusted to BTRFS! How? Holding down the power button!

Seemingly regardless of the drives, interface, or kernel, other filesystems paired with LVM or mdraid fail/recover/lie more gracefully. NVMe or SATA (spindles). Demonstrated back-to-back with replacements from different batches.

Truly disheartening, I want BTRFS. I would like to dedicate some time to this, but, well, time remains of the essence. I'm hoping it's something boring like my luck with boards/storage controllers, /shrug.

nolist_policy 1 hours ago [-]
Well, what are you waiting for? Get your findings to the btrfs-devel mailing list, and include your drive make and model. Even better if it's reproducible.
bravetraveler 26 minutes ago [-]
I'll get right on that, boss. Already said: time. I'd like to spend more of mine triaging this before I waste that of others. Particularly the developers. I don't mind y'all so much :)

I posted that hoping someone might yield some insight, finding it might catch unsolicited advice. Hmm. I was passively reading/commenting, now you want work.

The problem: work, money, all compete for a limited amount of time. I'll spend it how I like, Square? Comments win over rigorous testing with my schedule, thanks.

Why don't you try to reproduce it? Better things to do, this isn't the mailing list? Exactly.

iamawacko 1 hours ago [-]
btrfs is good, but it's far from perfect. RAID 5 and 6 don't exactly work, it can have problems at high snapshot counts, and there are lots of even recent reports of corruption and other kinds of filesystem damage.

It feels more user friendly than ZFS, but ZFS is much more feature complete. I used to use btrfs for all my personal stuff, but honestly ext4 is just easier.

exploderate 8 hours ago [-]
The one line "article" on lwn.net has a link to this email:

  From: Kent Overstreet @ 2025-09-11 23:19 UTC
 
  As many of you are no doubt aware, bcachefs is switching to shipping as
  a DKMS module. Once the DKMS packages are in place very little should
  change for end users, but we've got some work to do on the distribution
  side of things to make sure things go smoothly.

  Good news: ...
https://lore.kernel.org/linux-bcachefs/yokpt2d2g2lluyomtqrdv...
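
For context, DKMS rebuilds an out-of-tree module from source against each installed kernel, driven by a small dkms.conf. A hypothetical sketch of what such a file could look like (version, paths, and make invocation are illustrative, not the actual bcachefs packaging):

    PACKAGE_NAME="bcachefs"
    PACKAGE_VERSION="1.0.0"
    BUILT_MODULE_NAME[0]="bcachefs"
    DEST_MODULE_LOCATION[0]="/kernel/fs/bcachefs"
    MAKE[0]="make KERNELRELEASE=$kernelver"
    CLEAN="make clean"
    AUTOINSTALL="yes"

With AUTOINSTALL="yes", the module is rebuilt automatically whenever a new kernel is installed, which is what makes the "very little should change for end users" claim plausible.
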
doublerabbit 8 hours ago [-]
> Once the DKMS packages are in place very little should change for end users

Doesn't that mean I now have to enroll the MOK key on all my work workstations that use secure boot? If so, that's a huge PITA on over 200 machines. As with the NVIDIA driver, you can't automate the facility.

yomismoaqui 7 hours ago [-]
Not to troll, I'm asking in good faith

Is this filesystem stable enough for deploying on 200 production machines?

From a cursory look I get things like this:

https://hackaday.com/2025/06/10/the-ongoing-bcachefs-filesys...

bravetraveler 6 hours ago [-]
I have to constantly adjust my comfort level regarding what 'production' means. Consider the prep conditions, or 'prod', for your typical Chef or Butcher!

Anyway, fair question IMO. Another point I'd like to make... migrating away from this filesystem, disabling secure boot, or leaning into key enrollment would be fine. Dealer's choice.

The 'forced interaction' for enrollment absolutely presents a hurdle. That said: this wouldn't be the first time I've used 'expect' to use the management interface at scale. 200 is a good warm up.

The easy way is to... opt out of secure boot. Get an exception if your compliance program demands it [and tell them about this module, too]. Don't forget your 'Business Continuity/Disaster Recovery' of... everything. Documents, scheduled procedures, tooling, whatever.

Again, though, stability is a fair question/point. Filesystems and storage are cursed. That would be my concern before 'how do I scale', which comparatively, is a dream.

doublerabbit 5 hours ago [-]
> The easy way is to... opt out of secure boot. Get an exception if your compliance program demands it.

Not going to happen. Secure Boot is a mandatory requirement in this scenario.

I can't talk further because NDA, but sure am confused by the downvotes for asking a question.

bravetraveler 4 hours ago [-]
Fair 'nuff; say no more -- I get it. Neat they don't mind a 'plugged' kernel, otherwise :) I find the situation interesting to say the least!

I'll hit this post positively in an attempt to counter the down-trend. edit: well, that was for squat.

LeFantome 5 hours ago [-]
I am not going to advocate to put bcachefs on 200 production machines.

However, I would like to push back on that article.

It says that bcachefs is "unstable" but provides no evidence to support that.

It says that Linus pushed back on it. Yes, but not for technical reasons but rather process ones. Think about that for a second though. Linus is brutal on technology. And I have never heard him criticize bcachefs technically except to say that case insensitivity is bad. Kind of an endorsement.

Yes, there have been a lot of patches. It is certainly under heavy development. But people are not losing their data. Kent submitted a giant list of changes for the kernel 6.17 merge window (ironically totally on time). Linus never took them. We are all using the 6.16 version of bcachefs without those patches. I imagine stories of bcachefs data loss would get lots of press right now. Have you heard any?

There are very few stories of bcachefs data loss. When I have heard of them, they seem to result in recovery. A couple I have seen were mount failures (not data loss) and were resolved. It has been rock-solid for me.

koverstreet 1 hours ago [-]
Eh? Linus has called it "experimental garbage that no one could be using" a whole bunch of times, based on absolutely nothing as far as I can tell.

Meanwhile just scan the thread for btrfs reports...

jandrese 6 hours ago [-]
Don't you only have to do that once per machine? After that the kernel should use the key you installed for every module that needs it. It is a pain in the ass for sure, but if you make it part of the deployment process it's manageable.

For sure it's a headache when you install some module on a whole bunch of headless boxes at once and then discover you need to roll a crash cart over to each and every one to get them booting again, but the secure boot guys would have it no other way.
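
The once-per-machine step being discussed is typically done with mokutil; a sketch (the key path is illustrative):

    # Stage the signing certificate for enrollment (DER format).
    # You'll be asked to set a one-time password:
    mokutil --import /var/lib/dkms/mok.der
    # On the next boot, the firmware's MokManager prompts for that
    # password to confirm - this console step is what can't be scripted.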

patrakov 12 hours ago [-]
The end result is still positive. Before the mainline submission, Bcachefs could not be DKMSed, as it relied on changes in other subsystems, as opposed to just additions, so you had to compile your own kernel. Now, it is available as something that can be compiled as a module for any recent-enough third-party kernel.
aragilar 12 hours ago [-]
But presumably if said changes preventing DKMS usage were reasonable they would have been merged anyway independent of bcachefs, and likely with less drama and disruption? I'm not suggesting that there aren't some silver linings to the cloud, but it doesn't seem like the result is anywhere near neutral (let alone positive) for anyone involved.
arghwhat 6 hours ago [-]
Yes and no - the kernel interfaces only reflect what the kernel itself needs. It doesn't to my knowledge maintain interfaces for the purpose of enabling out-of-tree modules.

Changes would therefore need to be an improvement for in-tree drivers, and not merely something for an out-of-tree driver.

dev_l1x_be 9 hours ago [-]
Was this the reason they removed it from mainline?
masklinn 8 hours ago [-]
No, the reason they removed it from mainline is that the maintainer has proved (yet again) incapable of working with others if they can't get their way: https://lore.kernel.org/all/CAHk-=wi+k8E4kWR8c-nREP0+EA4D+=r...
odo1242 7 hours ago [-]
No, it was a conflict between Kent Overstreet and Linus Torvalds over Kent Overstreet constantly submitting patches too late in the merge window.
charcircuit 6 hours ago [-]
Submitting bug fixes for release candidates is not too late in the merge window. That's why they are a release candidate and not the final release.
cwillu 4 hours ago [-]
Slipping feature code in with bug fixes is an abuse of good faith.
charcircuit 3 hours ago [-]
Not every bug fix is going to be a trivial one-line change. Some fixes are going to be more involved. He was not just mixing in feature code; the code was needed to fix the bug.
bmicraft 2 hours ago [-]
More involved fixes are not something allowed in rc kernels, and Kent knew that.
charcircuit 24 minutes ago [-]
Losing files is a critical bug. Priority 0. For such an impactful bug exceptions should be made to have a proper fix.
sc68cal 5 hours ago [-]
They were not bug fixes
rurban 4 hours ago [-]
They were repair fixes to fix a corrupted filesystem, which is allowed in a merge window. XFS did the very same before. Kent's code was just a lot.
kouteiheika 12 hours ago [-]
...for now. The policy of Linux is that they don't care about external modules/drivers at all, so once they start removing whatever bcachefs needs because no in-tree filesystem uses it we'll be back to a world of pain. (Unless they make an exception; they sure don't make one for ZFS.)
ThatPlayer 11 hours ago [-]
It doesn't seem to have been completely removed: Bcachefs is still listed among the MAINTAINERS: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/lin... where only the references to removed files were removed.

For now

alphabetag675 10 hours ago [-]
That's patently untrue. They do not remove stuff, they just keep changing the API, which means that the modules need to keep evolving.
matja 10 hours ago [-]
does removing EXPORT_SYMBOL(__kernel_fpu_end); [0] - which broke ZFS - count as removing stuff or changing the API?

AFAIK that change didn't add functionality or fix any existing issues, other than breaking ZFS - which GKH was absolutely fine with, dismissing several requests for it to be reverted, stating the "policy": [1]

> Sorry, no, we do not keep symbols exported for no in-kernel users.

[0] https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux... [1] https://lore.kernel.org/lkml/20190111054058.GA27966@kroah.co...

mnau 9 hours ago [-]
Quite reasonable policy. Add a second line too:

> Sun explicitly did not want their code to work on Linux, so why would we do extra work to get their code to work properly?

Why would you accommodate someone who explicitly went out of their way to not accommodate you?

It took many conflicts with the bcachefs developer to reach this state. The olive branch has been extended again and again...

nubinetwork 9 hours ago [-]
Sun doesn't exist anymore, and while OpenZFS is compatible with older versions of Oracle's life-support Solaris, it's not the same ecosystem. Yes, the same licensing issues still exist, but OpenZFS has been developed by LLNL for Linux ever since it was called "ZFS on Linux".
capitol_ 9 hours ago [-]
If that ecosystem has changed its values/opinions on that topic, then it wouldn't be an impossible task to dual-license it with a compatible license.

(Hard and tedious work, but not impossible).

SXX 9 hours ago [-]
The only entity that can change the ZFS license is Oracle, and they obviously wouldn't do that.
bluGill 8 hours ago [-]
They could rewrite all the code, and then change the license. Patents might still apply (but patents are short enough that I expect any that existed have expired). However, ZFS is a lot of code that is often tricky to get right. It will be really hard to rewrite it in a way that the courts don't (reasonably/correctly) say wasn't a rewrite but just moving some lines around so you can claim ownership - but it is possible. By the time anyone knows enough about ZFS to attempt this, they are also too tainted by the existing code.

So of course they won't, but it isn't impossible.

habitue 7 hours ago [-]
I mean, bcachefs is basically the equivalent of rewriting all that code, without explicitly trying to be a clone. Same for btrfs
bluGill 7 hours ago [-]
And how hard that is proves that zfs didn't make a bad choice in not trying the same. (Though it would be interesting if either had a goal of being a clone - that is, the same on-disk data structures. Interesting, but probably a bad decision, as I have no doubt there is something about zfs that they regret today - just because the project is more than 10 years old.)
ggiesen 8 hours ago [-]
It's supposedly the opinion of Oracle that the CDDL is GPL-compatible and that's the reason they won't do that.
Conan_Kudo 52 minutes ago [-]
Oracle didn't follow that with DTrace. They changed the license away from CDDL when they integrated it into Oracle Linux.
Macha 6 hours ago [-]
I would not rely on the non-binding opinion of a company known for deploying its lawyers in aid of revenue generation
ggiesen 8 hours ago [-]
wbl 7 hours ago [-]
That wasn't exactly the answer.
ggiesen 7 hours ago [-]
Yeah, I agree; based on rewatching that, I've either misrecalled the original material, or I got it from another source.

I agree that based on that source, it's more like "meh, we don't really care" (until they do)

p_l 8 hours ago [-]
The whole "Sun explicitly did not want" is an invention of one person at a conference, and opposite to what other insiders say
mnau 6 hours ago [-]
Ok, please explain. ZFS is licensed under the CDDL, which is incompatible with the GPL, aka the kernel license. Sun owned the copyright and could easily have changed the license or dual-licensed. They didn't... for reasons (likely related to Solaris).
p_l 4 hours ago [-]
Sun leadership wanted to license OpenSolaris under GPLv3. However, GPLv3 work was dragging on at the FSF and the license was not released in time. Moreover, there was opposition from the Solaris dev team due to a belief that GPLv3 would lock out reuse of OpenSolaris code (especially DTrace and ZFS) in Free/Net/OpenBSD.

CDDL was a compromise choice that was seen as workable for inclusion, based especially on certain older views of what code would be compatible or not, and it was unclear and possibly expected that the Linux kernel would move to GPLv3 (when it finally released), which the CDDL drafters saw as compatible with CDDL.

Alas, the Solaris source release could not wait an unclear amount of time for GPLv3 to be finalized

mnau 58 minutes ago [-]
So... as I said "Sun explicitly did not want". They chose not to license it under GPLv2 or dual license GPLv2 + GPLv3 for... reasons.

> it was unclear and possibly expected that Linux kernel will move to GPLv3

In what world? The kernel was always GPLv2 without the "or later" clause, and had tens of thousands of contributors. Linus made it quite obvious by that time that the kernel would not move to GPLv3 (even in 2006).

Even if I gave them the benefit of the doubt, GPLv3 was released in 2007. They had years to make the license change and didn't. They were sold to Oracle in 2010.

Sanzig 9 hours ago [-]
Sun is dead and the ZFS copyright transferred to Oracle who then turned it into a closed source product.

The modern OpenZFS project is not part of Oracle, it's a community fork from the last open source version. OpenZFS is what people think of when they say ZFS, it's the version with support for Linux (contributed in large part by work done at Lawrence Livermore).

The OpenZFS project still has to continue using the CDDL license that Sun originally used. The opinion of the Linux team is the CDDL is not GPL compatible, which is what prevents it from being mainlined in Linux (it should be noted not everyone shares this view, but obviously nobody wants to test it in court).

It's very frustrating when people ascribe malice to the OpenZFS team for having an incompatible license. I am sure they would happily change it to something GPL compatible if they could, but their hands are tied: since it's a derivative work of Sun's ZFS, the only one with the power to do that is Oracle, and good luck getting them to agree to that when they're still selling closed source ZFS for enterprise.

chasil 8 hours ago [-]
The battle for ZFS could easily now devolve to IBM and Oracle.

Making /home into a btrfs filesystem would be an opening salvo.

IBM now controls Oracle's premier OS. That is leverage.

sho_hn 7 hours ago [-]
Several large distros use btrfs for /home.
remix2000 8 hours ago [-]
Reading the kernel mailing lists wrt bcachefs, it looked more like a cattle prod than an olive branch to me… Kent didn't do anything other maintainers don't do, except make one filesystem that doesn't get irrecoverably corrupted on brownout.

I'm just sorry for the guy and perhaps a little bit sorry for myself that I might have to reformat my primary box at some point…

Also unrelated, but Sun was a very open source friendly company with a wide portfolio of programs licensed under GNU licenses, without some of which Linux would still be useless to the general public.

Overall, designing a good filesystem is very hard, so perhaps don't bite the hand that feeds you…?

simlevesque 8 hours ago [-]
I have no idea if you read the right parts because that's not what happened at all.

The maintainer kept pushing new features at a time when only bugfixes are allowed. He also acted like a child when he was asked to follow procedures. Feel sorry for his bad listening and communication abilities.

jacobgkau 7 hours ago [-]
> The maintainer kept pushing new features at a time when only bugfix are allowed.

The "new features" were recovery features for people hit by bugs. I can see where the ambiguity came from.

matja 9 hours ago [-]
"accommodate" in this instance would have been accomplished by doing nothing. The Linux kernel developers actively made this change.
mnau 9 hours ago [-]
Doing "nothing" in this case seems to be leaving technical debt in a code.

I am not kernel developer, but less exposed API/functions is nearly always better.

The removed comment of function even starts with: Careful: __kernel_fpu_begin/end() must be called with

swinglock 3 hours ago [-]
Though it's rich coming from a kernel lacking a better filesystem of its own.
timeon 9 hours ago [-]
Not sure if the thread is about how reasonable is the policy or if it is patently untrue that things get removed.
darthcloud 5 hours ago [-]
I was curious how OpenZFS worked around that and found [0] & [1]

[0] https://github.com/openzfs/zfs/issues/8259

[1] https://github.com/openzfs/zfs/pull/8965

AndrewDavis 9 hours ago [-]
That's all a matter of perspective.

Is moving a symbol from EXPORT_SYMBOL(some_func) to EXPORT_SYMBOL_GPL(some_func) actually changing the API? Nope, the API is exactly the same as it was before; it's just changed who is allowed to use it.

From the perspective of an out of tree module that isn't GPL you have removed stuff.

I'm honestly not sure how one outside the kernel community could construe that as not removing something.
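
A minimal kernel-side sketch of the distinction being discussed (function names are illustrative, not the actual __kernel_fpu_* symbols):

    #include <linux/module.h>

    int helper_for_everyone(void)
    {
        return 0;
    }
    /* Resolvable by any loadable module, whatever its license: */
    EXPORT_SYMBOL(helper_for_everyone);

    int helper_for_gpl_only(void)
    {
        return 0;
    }
    /* Only resolvable by modules declaring a GPL-compatible
     * MODULE_LICENSE(); a CDDL-licensed module that references
     * this symbol will fail to load: */
    EXPORT_SYMBOL_GPL(helper_for_gpl_only);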

happymellon 9 hours ago [-]
They could always move to a compatible license?

No, it was always designed to be hostile to Linux from the outset. It's a project that doesn't want to interoperate with Linux, so I'm not entirely sure why you think the Linux folks should maintain an API for them.

toast0 6 hours ago [-]
> They could always move to a compatible license?

Linux and OpenZFS are pretty much locked into their licenses, regardless of what people might want today. There are too many contributors to Linux to relicense, and while OpenZFS has fewer, I don't think there's any reason to think Oracle would relicense, given they went back to closed source with Solaris and ZFS on Solaris.

> It's a project that doesn't want to interoperability with Linux.

Regardless of the original intent of Sun in picking the license, it's hard to imagine a project called ZFS on Linux (which was merged into OpenZFS) doesn't want to interoperate with Linux.

p_l 8 hours ago [-]
It wasn't designed to be hostile to Linux - an insider claims they expected integration within weeks; the license kerfuffle was a surprise.

The choice to create a new license came down to two reasons:

- Internally, people wanted the code to be usable by more than just Linux and Solaris (lots of BSD fans, for example)

- Sun was insisting on mutual patent protection clauses, because GPLv2 didn't support them and GPLv3 was not yet available to discuss viability at all.

Sanzig 8 hours ago [-]
You are ascribing motives to the OpenZFS project that aren't there. Sun was the one that licensed it under the CDDL, and OpenZFS is a fork that was created when Oracle bought Sun and decided to close-source ZFS. Oracle has zero involvement with OpenZFS.

Since the pre-fork code is from Sun, Oracle owns the copyright, and they won't re-license it.

The idea that the OpenZFS team wants CDDL out of spite for Linux is an absurd conspiracy theory. Their hands are tied - I'm sure they'd move to a compatible license if they could, but they can't.

p_l 8 hours ago [-]
CDDL ensures OpenZFS can be used for not just Linux, and prevents patent-based attacks (which have been used against GPLv2 code reuse in the past).

So the OpenZFS team is not exactly interested in moving to GPLv2, because it would break multiple platforms.

Sanzig 7 hours ago [-]
I doubt the OpenZFS team would move to GPLv2 if they were able to relicense to anything they wanted. Given their close association with FreeBSD, BSD-2 or a similar permissive license wouldn't shock me.

But it's an academic exercise anyway, since it seems Oracle has no intention of allowing them to relicense.

tarruda 12 hours ago [-]
I hope it eventually comes back once it is more stable.

Would be great to have an in kernel alternative to ZFS for parity RAID.

masklinn 12 hours ago [-]
It was not removed due to instability but due to the maintainer’s inability to respect guidelines set by others when they don’t personally agree.

This is not the first project for which this was an issue, and said maintainer has shown no will to alter their behaviour before or since.

burnte 7 hours ago [-]
Yep. All people asked him to do was slow down a bit because they felt it was too much change at once. He refused to slow down for any reason other than his own. He said he only saw three reasons to slow down and none of them applied, so Linus should just accept his patch now.

I never understand why some people are unwilling to make any attempt at getting along. Some people seem to feel any level of compromise is too much.

akimbostrawman 11 hours ago [-]
He justified breaking the guidelines to address critical issues. One can hope these kinds of problems would not happen that frequently in a stable project; besides, it is still experimental.
jeltz 8 hours ago [-]
What he actually did was bundle the fix for the critical issue with debugging tools and a totally new experimental feature. I totally get why they stopped working with him.
bcrl 8 hours ago [-]
A feature that was put in place to allow users encountering the bug to recover their data. It's not as black and white as you are portraying.
jeltz 8 hours ago [-]
It was still an entirely new and experimental feature which had not been properly reviewed. Why couldn't this feature wait until the next kernel version? Other file systems have had their recovery tools improved over many years.
bcrl 7 hours ago [-]
Filesystems like ext2/3/4 have their recovery tools in userland. Most of the recovery features in bcachefs are in the kernel. As a result, it is inevitable that at some point there was and will be a need to push a new feature into a stable release for the purpose of data recovery.

Over the long term the number of cases where such a response is needed will decrease as expected.

Do you really want to live in a world where data loss in stable releases is considered okay?

stycznik 6 hours ago [-]
>it is inevitable that at some point there was and will be a need to push a new feature into a stable release for the purpose of data recovery

It's really not, the proper way to recover your important data is to restore from backups, not to force other people to bend longstanding rules for you.

>Do you really want to live in a world where data losses in stable releases is considered Okay?

Bcachefs is an experimental filesystem.

streb-lo 6 hours ago [-]
That's a good argument to keep recovery tools in userland rather than bend the kernel around them.

Why do they need to be in the kernel anyways? Presumably they are running on an unmounted device?

bcrl 6 hours ago [-]
No, it is not. bcachefs needs to have all the code for error recovery in the kernel, as it needs to be available when a storage device fails in any of a myriad of ways.

Maintaining a piece of code that needs to run in both user space and the kernel is messy and time consuming. You end up running into issues where dependencies require porting gobs of infrastructure from the kernel into userspace. That's easy for some things, very hard for others. There's a better place to spend those resources: stabilizing bcachefs in the kernel where it belongs.

Other people have tried and failed at this before, and I'm sure that someone will try the same thing again in the future and relearn the same lesson. I know as business requirements for a former employer resulted in such a beast. Other people thought they could just run their userspace code in the kernel, but they didn't know about limits on kernel stack size, they didn't know about contexts where blocking vs non-blocking behaviour is required or how that interacted with softirqs. Please, just don't do this or advocate for it.

Denvercoder9 5 hours ago [-]
bcachefs in the upstream kernel was explicitly marked as being experimental, you can't consider it a stable release.
irusensei 7 hours ago [-]
The fact you got downvoted makes me shake my head. One could still interpret this as a contributor violation, and that's fair.

If I'm not mistaken, Kent pushed recovery routines in the RC to handle some catastrophic bug a user caused by loading the current metadata format into an old 6.12 kernel.

It isn't some sinister "sneaking in of features". This fact seems to be omitted by clickbaity coverage of the situation.

bcrl 6 hours ago [-]
As I pointed out elsewhere, there was another -rc release put out shortly after that effectively added back in a feature that was removed 10 releases back. Granted, it was only a small thing, but it shows that there is nuance in application of the rule.

Rule 1: don't assume malice.

mrweasel 10 hours ago [-]
My takeaway from trying to follow the discussion on the kernel mailing list was that the Bcachefs developer wants to work in a certain way that Linus does not think fits in with the rest of the kernel (to put it mildly). Having Bcachefs in the kernel certainly helps with adoption, but I can't help thinking that a kernel module might be more in line with the development process that Bcachefs wants.

The underlying problem might have been importing Bcachefs into the mainline kernel too early in its life cycle.

jeltz 8 hours ago [-]
No, this is a pure people problem which would have happened no matter the state of Bcachefs. Kent refuses to respect other people's time and rules since that would require him to change how he works.
typpilol 3 hours ago [-]
On the other hand, Linus is constantly changing the submission deadlines based on his own personal travel.

A lot of people aren't going to keep up with Linus's personal travel plans just so they don't send a late patch.

mrktf 10 hours ago [-]
As an occasional follower, my opinion is that Kent overdid bending the rules until Linus & co. got fed up.
happymellon 9 hours ago [-]
> He justified breaking the guidelines to address critical issues

That claim was to add new logging functionality to allow better troubleshooting to eventually address critical issues.

This should have been out of trunk for someone to test, rather than claiming it to be something that wasn't strictly true. Especially when it's the kernel.

ranger_danger 10 hours ago [-]
Except he had a history of rushing large changes in at the last minute that were always critical, and would constantly argue about policy during the same time, which is not the appropriate time or place.

He refused to acknowledge his place on the totem pole and thought he knew better than everyone else, and that they should change their ways to suit his whims.

cogman10 9 hours ago [-]
The way I read it, it was wrapping feature and bug fix changes together: "We found a critical bug here; to fix it, instead of backporting a fix, we want to pull in all the code which was built before the bug was discovered."

I can understand the motivation. It's a PITA to support an older version of code. But that's not how Linux gets its stability.

MBCook 8 hours ago [-]
He also had bad interactions with other developers, like constantly shitting on other file systems and generally behaving like a jerk completely unnecessarily.
TheCraiggers 8 hours ago [-]
That's just bad git hygiene, and lots of lead devs deal with this across the development world. One change per commit/PR please.
cogman10 8 hours ago [-]
I think it's more natural.

    Commit A: introduce the bug
    Commit B: change architecture
    Commit C: add a feature
    Commit D: fix A using code present in B and C.
The issue ends up being that D needs to be reimplemented to fix A because B and C don't exist on the tip.

Since linux has closed windows and long term kernels it means the fix to the same bug could need to be done in multiple ways.

Multiple changes per PR is bad, but I assume it's still one change per commit.

TheCraiggers 23 minutes ago [-]
Yeah, but then you run into scenarios where A+D is tested and ready, but B and/or C are not. Git does give you tools to separate them, but most people don't like doing that for various reasons.

IMHO, it may be more natural, but only during development. Trying to do a git bisect on git histories like the above is a huge pain. Trying to split things up when A is ready but B/C are not is a huge pain.

typpilol 3 hours ago [-]
Linux is not known for its stable ABI lol
bityard 7 hours ago [-]
I hope it comes back too, just not with Kent as the lead developer.
lproven 11 hours ago [-]
> I hope it eventually comes back once it is more stable.

Yes, me too.

> Would be great to have an in kernel alternative to ZFS

Yes it would.

> for parity RAID.

No.

Think of the Pareto Principle here. 80% of the people only use 20% of the functionality. BUT they don't all use the same 20% so overall you need 80% of the functionality... or more.

ZFS is one of the rivals here.

But Btrfs is another. Stratis is another. HAMMER2 is another. MDRAID is another. LVM is another.

All provide some or all of that 20%, and all have pros and cons.

The point is that, yes, ZFS is good at RAID and it's much much easier than ext4 on MDRAID or something.

Btrfs can do that too.

But ZFS and Btrfs do COW snapshots. Those are important too. OpenSUSE, Garuda Linux, siduction and others depend on Btrfs COW.

OK, fine, no problem, your use case is RAID. I use that too. Good.

But COW is just as important.

Integrity is just as important and Btrfs fails at that. That is why the Bcachefs slogan is "the COW filesystem that won't eat your data."

Btrfs ate my data 2-3 times a year for 4 years.

Doesn't matter how many people praise it; what matters are the victims who have been burned when it fails. They prove that it does fail.

The point is not "I can do that with ext4 on mdraid" or "I can do that with LVM2" or "Btrfs is fine for me".

The point is something that can do _all of these_ and do it _better_ -- and here, "better" includes "in a simpler way".

Simpler here meaning "simpler to set up" and also "simpler in implementation" (compared to, say, Btrfs on LVM2, or Btrfs on mdraid, or LVM on mdraid, or ext4 on LVM on RAID).

Something that can remove entire layers of the stack and leave the same functionality is valuable.

Something that can remove 90% of the setup steps and leave identical functionality matters... Because different people do those steps in different order, or skip some, and you need to document that, and none of us document stuff enough.

The recovery steps for LVM on RAID are totally different from RAID on LVM. The recovery for Btrfs on mdraid is totally different from just Btrfs RAID.

This is why tools that eliminate this matter. Because when it matters whether you have

1 - 2 - 3 - 4 - 5

or

1 - 2 - 4 - 3 - 5

Then the sword that chops the Gordian knot here is one tool that does 1-5 in a single step.

This remains true even if you only use 1 and 5, or 2 and 3, and it still matters if you only do 4.

pessimizer 5 hours ago [-]
As far as I know, ZFS is either for smart people who want to do something sophisticated or trendy people who want to do something unwise.

> ext4 on MDRAID or something

Are trivially easy to set up, expand, or replace drives; require no upkeep; and no setup when placed into entirely different systems. Anybody using ZFS or ZFS-like to do some trivial standard RAID setup (unless they are used to and comfortable with ZFS, which is an entirely different story) is just begging to lose data. MDADM is fine.

yjftsjthsd-h 5 hours ago [-]
> As far as I know, ZFS is either for smart people who want to do something sophisticated or trendy people who want to do something unwise.

Or people who want data checksums.

> Anybody using ZFS or ZFS-like to do some trivial standard RAID setup (unless they are used to and comfortable with ZFS, which is an entirely different story) is just begging to lose data.

How? You just... hand it some devices, and it makes a pool. Drive replacement is a single command.

ZoomZoomZoom 11 hours ago [-]
More stable than what?

I have a multidevice filesystem, comprised of old HDDs and one sketchy PCI-SATA extension. This FS was assembled in 2019 and, though it went through periods of being non-writable, is still working, and I haven't lost any[1] data. That is more than 5 years, a multitude of FS version upgrades, and multiple device replacements with corresponding data evacuation and re-replication.

[1] Technically, I did lose some, when a dying device started misbehaving and writing garbage, and I was impatient and ran a destructive fsck (with fix_errors) before waiting for a bug patch.

Don't want to compare it to other solutions but this is impressive even on its own merits.

tarruda 10 hours ago [-]
> More stable than what?

IIRC the whole drama began because Kent was constantly pushing new features along with critical bug fixes after the proper merge window.

I meant stable in the sense where most changes are bug fixes, reducing the friction of working within the kernel schedules.

MBCook 8 hours ago [-]
It was also an attitude/civility thing in addition to the code stuff.
maxlin 8 hours ago [-]
At least some of the OSS drama still is just purely code-based these days...

The dev acted out of line for kernel development, even if _kind_ of understandable (like with the recovery tool), but still in a way that would set a bad precedent for the kernel, so this appears to be good judgement from Linus.

Hope the best for Bcachefs's future

wizardforhire 8 hours ago [-]
I was one week away from setting up a new cluster and was all in on bcachefs, drama be damned … that was until this[1]

Bcachefs is exciting on paper, but even just playing around there are some things that are just untenable imho. Time has proven that the stability of a project stems from the stability of the teams and culture behind it. As such the numbers don’t lie and unless it can be at parity with existing filesystems I can’t be bothered to forgive the misgivings. I’m looking forward to the day when bcachefs matures… if ever, as it is exciting.

Also if something has changed in the last year I’d love to hear about it! I just haven’t found anything compelling enough yet to risk my time bsing around with it atm.

[1] https://youtube.com/watch?v=_RKSaY4glSc&pp=ygUZTGludXMgZmlsZ...

fer 9 hours ago [-]
I have high hopes for bcachefs, but so far the benchmarks[0] are quite disappointing. I understand it'll have overhead since it does many things, but I'd expect it to perform closer to btrfs or zfs, and it's consistently abysmal (which affects zfs at times, too).

[0] https://www.phoronix.com/review/linux-617-filesystems

pantalaimon 7 hours ago [-]
https://www.phoronix.com/review/bcachefs-617-dkms/2

has the benchmarks of the dkms module

odo1242 7 hours ago [-]
Why is the DKMS module so much faster than the original one? Just wondering lol
cpmsmith 6 hours ago [-]
I think it has improvements that were never upstreamed to the kernel, based on the developer's comments elsewhere[0].

[0]: https://www.phoronix.com/forums/forum/software/general-linux...

barrkel 8 hours ago [-]
It's hard to take those benchmarks too seriously. ZFS, btrfs and I guess bcachefs - which I've never used and don't have any opinion on - do things XFS and EXT4 don't and can't do.

I know more about ZFS than the others. It wasn't specified here whether ZFS had ashift=9 or 12; it tries to auto-detect, but that can go wrong. ashift=9 means ZFS is doing physical I/O in 512 bytes, which will be an emulation mode for the nvme. Maybe it was ashift=12. But you can't tell.

Secondly, ZFS defaults to a record size of 128k. Write a big file and it's written in "chunks" of 128k size. If you then run a random read/write I/O benchmark on it with a 4k block size, ZFS is going to be reading and writing 128k for every 4k of I/O. That's a huge amplification factor. If you're using ZFS for a load which resembles random block I/O, you'll want to tune the recordsize to the app I/O. And ZFS makes this easy, since child filesystem creation is trivially cheap and the recordsize can be tuned per filesystem.
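
For what it's worth, that per-dataset tuning is a one-liner (dataset names are hypothetical):

    # Create a child filesystem tuned for 8K database pages:
    zfs create -o recordsize=8K tank/db
    # Or tune an existing one (affects newly written blocks only):
    zfs set recordsize=8K tank/db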

And then there's the stuff things like ZFS does that XFS / EXT4 doesn't. For example, taking snapshots every 5 minutes (they're basically free), doing streaming incremental snapshot backups, snapshot cloning and so on - without getting into RAID flexibility.

cpmsmith 6 hours ago [-]
I don't think any of that means the benchmarks shouldn't be taken seriously. GP didn't say they expect Bcachefs to perform like EXT4/XFS, they said they expected more like Btrfs or ZFS, to which it has more similar features.

On the configuration stuff, these benchmarks intentionally only ever use the default configuration – they're not interested in the limits of what's possible with the filesystems, just what they do "out of the box", since that's what the overwhelming majority of users will experience.

barrkel 6 hours ago [-]
Anyone who uses zfs out of the box in a way substitutable with xfs, shouldn't. So I guess they serve a purpose that way. But that argument doesn't need any numbers at all.
yjftsjthsd-h 5 hours ago [-]
> Anyone who uses zfs out of the box in a way substitutable with xfs, shouldn't.

Substitutable how? Like, I'm typing this on a laptop with a single disk with a single zpool, because I want 1. compression, 2. data checksums, 3. to not break (previous experiments with btrfs ended poorly). Obviously I could run xfs, but then I'd miss important features.

Eikon 5 hours ago [-]
> If you're using ZFS for a load which resembles random block I/O, you'll want to tune the recordsize to the app I/O.

You probably don't want to do that because that'll result in massive metadata overhead, and nothing tells you that the app's I/O operations will be nicely aligned, so this cannot be given as general advice.

BearOso 9 hours ago [-]
Those benchmarks were messed up. Notice bcachefs is the only one using 512b block size. That's going to massively increase overhead.
Ardren 8 hours ago [-]
> I very much doubt that's the main issue - the multithreaded sqlite performance makes me wonder if something's up with our fsync performance. I'll wait to see the results with the DKMS version, and if the numbers are still bad I'll have to see if I can replicate it and start digging.

https://www.phoronix.com/forums/forum/software/general-linux...

charcircuit 7 hours ago [-]
It's sad that it came with this, but in the end Linus and Kent had different ideas on how distribution of fixes should work so it makes sense that we have reached a situation where Kent controls the distribution frequency of the file system.
mnau 10 hours ago [-]
I don't get why Linus just didn't tell the bcachefs developer to take a hike.

He is BDFL. "No, these changes do not belong in this part of our release window. No pull. End of discussion." Instead he always talked and caved and pulled. And of course the situation repeated, as they do...

bombcar 8 hours ago [-]
Linus did tell him to take a hike, and this is exactly what this is.

Perhaps as BDFL he let it slip a few too many times, but that's generally the way you want to go - as a leader, you want to trust your subordinates are doing the right thing; which means that you'll get burned a few times until you have to take action (like this).

The only other option makes you into a micromanager, which doesn't scale.

jeltz 8 hours ago [-]
Maybe Linus should have acted sooner but this is exactly what happened. Linus refused to bend the rules for Kent and things got heated and eventually Kent was banned from submitting patches and Bcachefs was removed.
dlivingston 8 hours ago [-]
What's the backstory here? I'm totally out of the loop.
jeltz 8 hours ago [-]
Kent wanted to merge new features after the merge window had closed (bundled with a bug fix), and Linus said no. Since this was not the first time Kent tried to do something like that, people got angry at him. And since Kent refused to apologize for repeatedly breaking the rules, things got heated.
dralley 7 hours ago [-]
In Kent's opinion the features were important to maintaining greater data integrity and recovering from various problems. Nobody else agreed and Kent wasn't taking no for an answer even from Linus.
dlivingston 5 hours ago [-]
I see, thanks. I came across this summary which echoes what you wrote: https://www.phoronix.com/news/Linux-616-Bcachefs-Late-Featur...
aidenn0 8 hours ago [-]
A manager must trust those people who are closer to the problem to make such decisions. When, in hindsight, it looks like the trust is habitually abused, the scalable solution isn't to micromanage that person, it's to fire them.
arccy 9 hours ago [-]
this is telling him to take a hike?
tpetry 4 hours ago [-]
Linus told him some time ago to take a hike. Kent was blocked from one (or two?) previous merge windows long ago for not behaving as expected of someone contributing to Linux. He didn't change. It has been drama since he was first added to the kernel. This was really not the first issue.
mnau 6 hours ago [-]
I meant not accepting pulls during the rc phase, not kicking him out. Instead he always sighed and accepted new features alongside bug fixes.
pluto_modadic 8 hours ago [-]
to take a hike sooner*
WesolyKubeczek 7 hours ago [-]
> I don't get why Linus just didn't tell bcachefs developer to take a hike.

> He is BDFL.

As far as I remember, the "B" in "BDFL" stands for "benevolent". This usually might mean giving a couple of warnings, giving the benefit of the doubt, extending some credit, and if that doesn't help, invoking the "D".

bagxrvxpepzn 7 hours ago [-]
Bcachefs comes off as a vanity project, as most open source software seems to be. The public rationale for it also strongly projects NIH. Therefore, its demise as everyone comes to grips with that is not very surprising. Hopefully this development serves to inoculate the kernel community against future wastes of resources. Perhaps the vetting process will become more rigorous before big merges like this.