ml.ruby-lang.org
ruby-core

ruby-core@ml.ruby-lang.org

February 2026

  • 2 participants
  • 111 discussions
[ruby-core:124711] [Ruby Feature#21869] Add receive_all Method to Ractor API for Message Batching
by synacker (Mikhail Milovidov) 07 Mar '26

Issue #21869 has been reported by synacker (Mikhail Milovidov).

----------------------------------------
Feature #21869: Add receive_all Method to Ractor API for Message Batching
https://bugs.ruby-lang.org/issues/21869

* Author: synacker (Mikhail Milovidov)
* Status: Open
----------------------------------------
**Summary**

The Ractor API provides an excellent mechanism for inter-thread communication, but it currently lacks a built-in message batching technique. I propose adding a receive_all method to enable batch processing of messages, which can significantly improve performance in high-load scenarios.

**Motivation**

In distributed queued systems, processing messages one by one (as with the current receive method) can introduce unnecessary overhead. Batch processing allows:

* Reduced context-switching overhead.
* More efficient I/O operations (e.g., fewer file writes).
* Better throughput in high-concurrency environments.

**Proposed Solution**

Add a receive_all method to the Ractor API that:

* Returns all available messages in the Ractor's mailbox at once (as an array).

**Demonstration Code**

Below is a benchmark comparing individual receive vs. batch receive_all:

``` ruby
require 'benchmark'

class RactorsTest
  def initialize(count)
    @count = count
    @ractor1 = Ractor.new(count, 'output1.txt') do |count, filename|
      File.open(filename, 'w') do |file|
        while count.positive?
          message = receive
          file.write("Ractor 1 received message: #{message}\n")
          file.flush
          count -= 1
        end
      end
    end
    @ractor2 = Ractor.new(count, 'output2.txt') do |count, filename|
      File.open(filename, 'w') do |file|
        while count.positive?
          messages = receive_all
          messages.each do |message|
            file.write("Ractor 2 received message: #{message}\n")
          end
          count -= messages.length
          file.flush
        end
      end
    end
  end

  def run1
    @count.times do |i|
      @ractor1.send("Message #{i + 1}")
    end
    @ractor1.join
  end

  def run2
    @count.times do |i|
      @ractor2.send("Message #{i + 1}")
    end
    @ractor2.join
  end
end

records = 1_000_000
test = RactorsTest.new(records)
p [:once, Benchmark.realtime { test.run1 }.round(2)]
p [:all, Benchmark.realtime { test.run2 }.round(2)]
```

**Benchmark Results**

On my system, receive_all shows ~4x improvement over individual receive.

**Key Observations:**

* Ractor1 (using receive): Processes each message individually, resulting in frequent I/O calls.
* Ractor2 (using receive_all): Processes all queued messages at once, minimizing I/O overhead.

--
https://bugs.ruby-lang.org/
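Since `receive_all` does not exist yet and Ractor has no non-blocking receive, the proposed batching semantics can be sketched with a plain `Queue`. The `drain_all` helper below is hypothetical and only illustrates the intended behavior (block for at least one message, then drain whatever else is queued):

```ruby
# Hypothetical sketch of the proposed receive_all semantics, using a
# core Queue instead of a Ractor mailbox.
def drain_all(queue)
  messages = [queue.pop]                           # block until at least one message arrives
  messages << queue.pop(true) until queue.empty?   # then drain the rest without blocking
  messages
end

q = Queue.new
3.times { |i| q << "Message #{i + 1}" }
p drain_all(q)  # => ["Message 1", "Message 2", "Message 3"]
```

`Queue#pop(true)` is the non-blocking form (it raises ThreadError on an empty queue), which is exactly the primitive the Ractor mailbox currently lacks.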
[ruby-core:124741] [Ruby Bug#21873] `UnboundMethod#==` returns false for methods obtained via extend + unbind
by mdalessio (Mike Dalessio) 07 Mar '26

Issue #21873 has been reported by mdalessio (Mike Dalessio).

----------------------------------------
Bug #21873: `UnboundMethod#==` returns false for methods obtained via extend + unbind
https://bugs.ruby-lang.org/issues/21873

* Author: mdalessio (Mike Dalessio)
* Status: Open
* Backport: 3.2: UNKNOWN, 3.3: UNKNOWN, 3.4: UNKNOWN, 4.0: UNKNOWN
----------------------------------------
## Description

`UnboundMethod#==` returns `false` when comparing a module's instance method against the same method obtained via `Method#unbind` on a class that includes or extends that module, despite having the same owner and source location.

```ruby
module MyMethods
  def hello = "hello"
end

class Base
  extend MyMethods
end

from_module = MyMethods.instance_method(:hello)
from_unbind = Base.method(:hello).unbind

p from_module.owner == from_unbind.owner                     #=> true
p from_module.source_location == from_unbind.source_location #=> true
p from_module.inspect == from_unbind.inspect                 #=> true
p from_module == from_unbind                                 #=> false (expected true)
```

## Diagnosis

`method_eq` compares method entries using `method_entry_defined_class`. For methods mixed in via `include`/`extend`, the "defined class" will be an ICLASS and not the original class/module. In the example above, `method_eq` is comparing a module's ICLASS entry against the module, and that will always be false.

Related: #18798 (fixed by @ko1 in 59e389af).

--
https://bugs.ruby-lang.org/
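Until the comparison is fixed, one possible workaround (an assumption on my part, not something proposed in the report) is to compare the methods' identity via `owner` and `name`, which the report shows are unaffected:

```ruby
module MyMethods
  def hello
    "hello"
  end
end

class Base
  extend MyMethods
end

from_module = MyMethods.instance_method(:hello)
from_unbind = Base.method(:hello).unbind

# Workaround: owner + name identity instead of UnboundMethod#==
same_method = from_module.owner.equal?(from_unbind.owner) &&
              from_module.name == from_unbind.name
p same_method  # => true
```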
[ruby-core:124673] [Ruby Bug#21860] Process.fork: the child may deadlock on `th->interrupt_lock` in `threadptr_interrupt_exec_cleanup`
by byroot (Jean Boussier) 07 Mar '26

Issue #21860 has been reported by byroot (Jean Boussier).

----------------------------------------
Bug #21860: Process.fork: the child may deadlock on `th->interrupt_lock` in `threadptr_interrupt_exec_cleanup`
https://bugs.ruby-lang.org/issues/21860

* Author: byroot (Jean Boussier)
* Status: Open
* ruby -v: ruby 3.4.4 (2025-05-14 revision a38531fd3f) +PRISM [aarch64-linux]
* Backport: 3.2: UNKNOWN, 3.3: UNKNOWN, 3.4: UNKNOWN, 4.0: UNKNOWN
----------------------------------------
We recently observed some deadlocked processes. They deadlocked during child initialization, right after `Process.fork`.

Based on the `gdb` session, the child is deadlocked in `threadptr_interrupt_exec_cleanup`, which suggests `th->interrupt_lock` was held by another thread in the parent and wasn't reinitialized in the child before calling `threadptr_interrupt_exec_cleanup`.

```c
(gdb) bt 20
#0  0x0000ffff855a09c0 in __lll_lock_wait () from /lib64/libc.so.6
#1  0x0000ffff855a6d60 in pthread_mutex_lock@@GLIBC_2.17 () from /lib64/libc.so.6
#2  0x0000aaaaaeec04c8 [PAC] in rb_native_mutex_lock (lock=<optimized out>) at /ruby-3.4.4/thread_pthread.c:116
#3  threadptr_interrupt_exec_cleanup (th=<optimized out>) at thread.c:6052
#4  thread_cleanup_func_before_exec (th_ptr=0xffff35803000) at thread.c:514
#5  thread_cleanup_func (atfork=1, th_ptr=0xffff35803000) at thread.c:524
#6  terminate_atfork_i (current_th=0xffff84e1e000, th=0xffff35803000) at thread.c:4769
#7  rb_thread_atfork_internal (atfork=<optimized out>, th=0xffff84e1e000) at thread.c:4736
#8  rb_thread_atfork () at thread.c:4779
#9  0x0000aaaaaee110fc [PAC] in after_fork_ruby (pid=0) at process.c:1693
#10 rb_fork_ruby (status=status@entry=0x0) at process.c:4253
#11 0x0000aaaaaee11154 [PAC] in proc_fork_pid () at process.c:4266
#12 rb_proc__fork (_obj=<optimized out>) at process.c:4313
#13 0x0000aaaaaeefeb7c [PAC] in vm_call_cfunc_with_frame_ (stack_bottom=0xffff84e7a1c8, argv=0xffff84e7a1d0, argc=0, calling=<optimized out>, reg_cfp=0xffff84f79150, ec=0xffff84e31050) at /ruby-3.4.4/vm_insnhelper.c:3794
#14 vm_call_cfunc_with_frame (ec=0xffff84e31050, reg_cfp=0xffff84f79150, calling=<optimized out>) at /ruby-3.4.4/vm_insnhelper.c:3840
#15 0x0000aaaaaef1aa44 [PAC] in vm_sendish (method_explorer=<optimized out>, block_handler=<optimized out>, cd=<optimized out>, reg_cfp=<optimized out>, ec=<optimized out>) at /ruby-3.4.4/vm_callinfo.h:415
#16 vm_exec_core (ec=0xffff84e31050) at /ruby-3.4.4/insns.def:1063
#17 0x0000aaaaaef0aa58 [PAC] in rb_vm_exec (ec=ec@entry=0xffff84e31050) at vm.c:2595
#18 0x0000aaaaaef0fe48 [PAC] in vm_call0_body (ec=ec@entry=0xffff84e31050, calling=calling@entry=0xffffea5a1e78, argv=argv@entry=0x0) at /ruby-3.4.4/vm_eval.c:225
#19 0x0000aaaaaef136e8 in vm_call0_cc (kw_splat=0, cc=0xfffeee654508, argv=<optimized out>, argc=0, id=27393, recv=281472906794680, ec=0xffff84e31050) at /ruby-3.4.4/vm_eval.c:101
```

--
https://bugs.ruby-lang.org/
[ruby-core:124699] [Ruby Bug#21866] Backport Fix for integer overflow checks in enumerator
by rwstauner (Randy Stauner) 07 Mar '26

Issue #21866 has been reported by rwstauner (Randy Stauner).

----------------------------------------
Bug #21866: Backport Fix for integer overflow checks in enumerator
https://bugs.ruby-lang.org/issues/21866

* Author: rwstauner (Randy Stauner)
* Status: Open
* Assignee: rwstauner (Randy Stauner)
* Target version: 4.1
* Backport: 3.2: UNKNOWN, 3.3: UNKNOWN, 3.4: REQUIRED, 4.0: REQUIRED
----------------------------------------
I would like to backport this PR, which has already been merged to master: https://github.com/ruby/ruby/pull/15829

--
https://bugs.ruby-lang.org/
[ruby-core:123913] [Ruby Bug#21711] Prism and parse.y parses private endless method definition with block differently
by tompng (tomoya ishida) 07 Mar '26

Issue #21711 has been reported by tompng (tomoya ishida).

----------------------------------------
Bug #21711: Prism and parse.y parses private endless method definition with block differently
https://bugs.ruby-lang.org/issues/21711

* Author: tompng (tomoya ishida)
* Status: Open
* ruby -v: ruby 4.0.0dev (2025-11-26T06:41:42Z master 43ed35de6c) +YJIT +MN +PRISM [arm64-darwin24]
* Backport: 3.2: UNKNOWN, 3.3: UNKNOWN, 3.4: UNKNOWN
----------------------------------------
In the following code, the `do end` block is passed to `private` in Prism but passed to `tap` in parse.y:

~~~ruby
private def f = tap do
end
f # different result (prism: LocalJumpError, parse.y: returns main)
~~~

According to https://bugs.ruby-lang.org/issues/17398#note-10, `private def hello = puts "Hello" do expr end` should be parsed as `private (def hello = puts "Hello") do expr end`. This is correctly implemented in both Prism and parse.y, but when the rhs is `tap do end`, there is a discrepancy.

Another example. Prism: parse success, parse.y: syntax error.

~~~ruby
private def f = 1 do
end
~~~

--
https://bugs.ruby-lang.org/
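The two competing groupings can be spelled out with explicit brace blocks, so the difference in behavior is visible regardless of which parser compiled the file (a sketch; it assumes Ruby 3.0+ for endless method definitions):

```ruby
# parse.y's reading: the block belongs to `tap`,
# so calling the method yields and returns self.
def f1 = tap { }

# Prism's reading: `tap` receives no block (the `do end` would go to
# `private`), so calling the method raises LocalJumpError.
def f2 = tap

p f1.class  # at top level, tap { } returns self (main), an Object
begin
  f2
rescue LocalJumpError => e
  p e.class  # => LocalJumpError
end
```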
[ruby-core:124864] [Ruby Misc#21922] Permissions for committers for default/bundled/unbundled gems repositories
by Eregon (Benoit Daloze) 06 Mar '26

Issue #21922 has been reported by Eregon (Benoit Daloze).

----------------------------------------
Misc #21922: Permissions for committers for default/bundled/unbundled gems repositories
https://bugs.ruby-lang.org/issues/21922

* Author: Eregon (Benoit Daloze)
* Status: Open
----------------------------------------
I noticed recently that the team `ruby-committers` on GitHub no longer has write access to at least:

* https://github.com/ruby/benchmark
* https://github.com/ruby/cmath
* https://github.com/ruby/curses
* https://github.com/ruby/dbm
* https://github.com/ruby/e2mmap
* https://github.com/ruby/gdbm
* https://github.com/ruby/getoptlong
* https://github.com/ruby/iconv
* https://github.com/ruby/mathn
* https://github.com/ruby/mutex_m
* https://github.com/ruby/net-ftp
* https://github.com/ruby/net-pop
* https://github.com/ruby/net-telnet
* https://github.com/ruby/observer
* https://github.com/ruby/pathname (marked as maintained by @akr but they don't reply on GitHub; there is also an unclear relation with core Pathname [which still hasn't been resolved](https://github.com/ruby/pathname/issues/66) and has been causing warnings for months)
* https://github.com/ruby/prime
* https://github.com/ruby/pstore
* https://github.com/ruby/readline
* https://github.com/ruby/readline-ext
* https://github.com/ruby/ruby2_keywords
* https://github.com/ruby/scanf
* https://github.com/ruby/sdbm
* https://github.com/ruby/set
* https://github.com/ruby/shell
* https://github.com/ruby/syck
* https://github.com/ruby/sync
* https://github.com/ruby/thwait
* https://github.com/ruby/tk
* https://github.com/ruby/tracer
* https://github.com/ruby/webrick
* https://github.com/ruby/win32api
* https://github.com/ruby/xmlrpc

This list is from a couple of cases I noticed myself + [all repos](https://github.com/orgs/ruby/repositories?q=mirror%3Afalse+fork%3Afa… - [those committers have access](https://github.com/orgs/ruby/teams/ruby-committers/repositories) - [repos with known maintainers](https://github.com/ruby/ruby/blob/master/doc/maintainers.md)). I filtered manually so there could be some mistake(s), though I tried to check carefully. I am certain CRuby committers had access to some of these repositories (e.g. I merged PRs there), but I'm not sure about all of them; some might already not have had write access for CRuby committers.

It seems only the 4 owners of the Ruby GitHub organization have write access to these repositories. What motivated these changes?

I believe it is valuable that all CRuby committers can merge to default/bundled/unbundled gems repositories *without active maintainers*, as it was before. There is [this list](https://github.com/ruby/ruby/blob/master/doc/maintainers.md) to define maintainers, though it's a little bit outdated and inaccurate. It's fine enough for this issue though. (A better definition IMO for active maintainers would be maintainers who actually respond to PRs and issues on GitHub for these repositories; otherwise they are effectively not maintaining that repository, at least from an external perspective.)

IOW it seems unreasonable to always have to ask one of the 4 owners of the Ruby GitHub organization to merge a PR to such repositories, as it would be a significant overhead for committers and for owners, and it would delay merging PRs significantly. I'm thinking for example of:

* documentation PRs ([many for pathname](https://github.com/ruby/pathname/pulls)) which really shouldn't need an owner to merge
* PRs to improve/fix the CI ([example](https://github.com/ruby/readline-ext/pull/29))
* PRs fixing compatibility with recent changes in ruby's master branch
* etc.

Yet another way to see this is that many default/bundled/unbundled gems do not have active maintainers. AFAIK, so far in such cases all CRuby committers could help, but this seems no longer the case.

(FWIW I saw there is a `default-gems-contributor` team with 3 people, which explains why they can merge PRs to some repositories that ruby committers can't, for example.)

--
https://bugs.ruby-lang.org/
[ruby-core:121627] [Ruby Feature#21264] Extract Date library from Ruby repo in the future
by hsbt (Hiroshi SHIBATA) 06 Mar '26

Issue #21264 has been reported by hsbt (Hiroshi SHIBATA).

----------------------------------------
Feature #21264: Extract Date library from Ruby repo in the future
https://bugs.ruby-lang.org/issues/21264

* Author: hsbt (Hiroshi SHIBATA)
* Status: Open
----------------------------------------
Note: This is not for Ruby 3.5.

`Date` and `DateTime` have had no primary maintainer for 10+ years. I would like to deprecate `date` via bundled gems to reduce our maintenance time, especially @nobu's. But `Time.parse` and `Time.strptime` are widely used now. How do we deprecate the `date` library?

1. Migrate `Date._strptime`, `Date.strptime` and `Date._parse` to `Time`. The current `Date` is migrated to bundled gems.
2. Migrate `Date` to the bundled gems. `Time.parse` and `Time.strptime` warn if `date` is not found.
3. Keep the current situation.
4. ...

Does anyone have another idea?

--
https://bugs.ruby-lang.org/
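For context on why options 1 and 2 mention `Time.parse`/`Time.strptime` at all: both come from the `time` stdlib, which delegates to `Date`'s parsing helpers internally, so extracting `date` breaks them unless those helpers move. A quick illustration of the dependency from the user's side:

```ruby
require "time"  # the time stdlib currently relies on Date._parse / Date._strptime

p Time.parse("2026-02-01 12:34:56").hour       # => 12
p Time.strptime("01-02-2026", "%d-%m-%Y").mon  # => 2
```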
[ruby-core:124797] [Ruby Feature#21875] Handling of trailing commas in lambda parameters
by Earlopain (Earlopain _) 06 Mar '26

Issue #21875 has been reported by Earlopain (Earlopain _).

----------------------------------------
Feature #21875: Handling of trailing commas in lambda parameters
https://bugs.ruby-lang.org/issues/21875

* Author: Earlopain (Earlopain _)
* Status: Open
----------------------------------------
https://bugs.ruby-lang.org/issues/19107 was accepted, which is about trailing commas in method definitions. Lambdas were not explicitly mentioned, but I wanted to confirm how they should behave with a trailing comma, or whether a trailing comma should even be accepted for them.

It's not clear to me, since lambdas sometimes behave like blocks and sometimes more like methods. `->(...) {}`, for example, is a syntax error (same as in blocks), but lambdas do check their arity, which blocks don't.

If a trailing comma is accepted, it can either:

* be an implicit splat, as in `foo do |bar,|; end` vs. `foo do |bar|; end`. It would also mean that the trailing comma is only allowed after a positional argument.
* just be ignored and accepted in most places, as for method definitions.

The first option would be rather useless in regards to https://bugs.ruby-lang.org/issues/19107, where you just want to add the comma for cleaner diffs. But I guess for lambdas this happens very rarely anyways.

--
https://bugs.ruby-lang.org/
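The block behavior the first option refers to can be seen today: a trailing comma in block parameters switches on array destructuring, so only the first element is bound, while lambdas enforce arity like methods do:

```ruby
# Trailing comma in a block: |bar,| destructures the array argument.
[[1, 2]].each { |bar,| p bar }  # => 1
[[1, 2]].each { |bar|  p bar }  # => [1, 2]

# Lambdas check arity strictly, like methods.
p ->(x) { x }.call(1)  # => 1
```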
[ruby-core:124887] [Ruby Bug#21926] Thread#value on popen3 wait thread hangs in finalizer
by stevecrozz (Stephen Crosby) 05 Mar '26

Issue #21926 has been reported by stevecrozz (Stephen Crosby).

----------------------------------------
Bug #21926: Thread#value on popen3 wait thread hangs in finalizer
https://bugs.ruby-lang.org/issues/21926

* Author: stevecrozz (Stephen Crosby)
* Status: Open
* ruby -v: 3.3.7
* Backport: 3.2: UNKNOWN, 3.3: UNKNOWN, 3.4: UNKNOWN, 4.0: UNKNOWN
----------------------------------------
Calling Thread#value on an Open3.popen3 wait thread from a finalizer completes in Ruby 3.2 but hangs in Ruby 3.3+. See repro.rb below. When the Ruby process hangs in these conditions, it no longer responds to signals and it seems to be unable to run any other threads. This affects the schmooze gem (and potentially other code using Open3.popen3 with finalizers), causing test suites to hang intermittently.

``` ruby
# repro.rb
require 'open3'

class ProcessWrapper
  def initialize
    @stdin, @stdout, @stderr, @wait_thread = Open3.popen3("cat")
    ObjectSpace.define_finalizer(self, self.class.make_finalizer(@stdin, @stdout, @stderr, @wait_thread))
  end

  def self.make_finalizer(stdin, stdout, stderr, wait_thread)
    proc do
      stdin.close rescue nil
      stdout.close rescue nil
      stderr.close rescue nil
      wait_thread.value # Hangs here in Ruby 3.3+
    end
  end
end

100.times { ProcessWrapper.new }
GC.stress = true
1000.times { Object.new }
puts "done"
```

## Environment

- Linux x86_64
- Tested on Ruby 3.2.7, 3.3.7, 3.4.8

--
https://bugs.ruby-lang.org/
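For contrast, the same close-then-wait sequence completes normally when run outside a finalizer, which narrows the hang down to finalizer/GC context. A minimal sketch (assumes a POSIX system with `cat` on PATH):

```ruby
require "open3"

stdin, stdout, stderr, wait_thr = Open3.popen3("cat")
stdin.close              # EOF on stdin lets cat exit
status = wait_thr.value  # blocks briefly, completes fine outside a finalizer
stdout.close
stderr.close
p status.success?  # => true
```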
[ruby-core:123672] [Ruby Feature#21665] Revisit Object#deep_freeze to support non-Ractor use cases
by headius (Charles Nutter) 05 Mar '26

Issue #21665 has been reported by headius (Charles Nutter).

----------------------------------------
Feature #21665: Revisit Object#deep_freeze to support non-Ractor use cases
https://bugs.ruby-lang.org/issues/21665

* Author: headius (Charles Nutter)
* Status: Open
----------------------------------------
## Proposal: Introduce `Object#deep_freeze` (or similar name) to freeze an entire object graph

I would like to re-propose the addition of Object#deep_freeze as a way to explicitly freeze an entire object graph. This proposal was rejected some years ago after being brought up in https://bugs.ruby-lang.org/issues/17145, in favor of Ractor-specific methods like Ractor.make_shareable.

There are a number of reasons why I believe `deep_freeze` is still an important addition:

* Rubyists have been requesting a way to deep freeze an object graph for many years (decades?), far longer than Ractor has existed.
* Immutable objects are the safest route to safe concurrency, with or without parallel threading or Ractor.
* In fact, deep freezing has utility *completely unrelated to concurrency*, such as guaranteeing that a large graph of objects will not be modified in the future.
* In the absence of `deep_freeze`, users have been forced to implement the behavior themselves, rely on third-party libraries, or call `Ractor.make_shareable` even if they never intend to use Ractor.
* The existing `Ractor.make_shareable` primarily does a deep freeze internally.

Given the steady move toward making immutability the norm in Ruby, it seems clear to me that deep freezing is a feature that is long overdue.

## Revisiting arguments for rejecting `deep_freeze`:

A number of reasons were given in #17145 for preferring the `Ractor.make_shareable` method and rejecting `deep_freeze`. I address those here:

@ko1:

> One concern about the name "freeze" is, what happens on shareable objects on Ractors.
> For example, Ractor objects are shareable and they don't need to freeze to send beyond Ractor boundary.

As mentioned above, deep freezing has utility completely separate from Ractors and concurrency. It is a frequently-requested and very useful feature to add. I think we should treat this as a standalone feature, and treat enhancements for Ractors as a separate concern.

@ko1:

> I also want to introduce Mutable but shareable objects using STM (or something similar) writing protocol (shareable Hash). What happens on deep_freeze?

Five years later, I believe this has not yet happened. A potential future optimization for Ractor should not be justification to reject a useful feature today. If users implement their code using primarily immutable objects now, it's unlikely that they will want those same objects to be mutable in the future (this applies to deep freezing as well as `make_shareable`).

@eregon:

> A dynamic call to freeze causes extra calls, and needs checks that it was indeed frozen.
> So for efficiency I think it would be better to mark as frozen internally without a call to freeze on every value.

I agree with the concerns about dynamic calls to freeze and overridden versions of the method. It may make more sense to implement this as a utility method, like `Object.deep_freeze(obj)` (a non-overridable class utility method). This is essentially what has been implemented within `Ractor.make_shareable` today.

@ko1:

> Maybe the author don't want to care about Ractor.
> The author want to declare "I don't touch it". So "deep_freeze" is better.

This was actually given as a justification for a `deep_freeze` method versus something like `Object#to_shareable`, and yet what we ended up with was a method that requires users to know about Ractor. I believe there should be a `deep_freeze` method that has nothing to do with Ractor. And users on JRuby and TruffleRuby can already get full parallelism today without Ractor. They do not care about Ractor, but they definitely care about deep freezing.

@eregon:

> I don't like anything with "ractor" in the name, that becomes not descriptive of what it does and IMHO looks weird for e.g. gems not specifically caring about Ractor.

This is a large part of my justification for revisiting this proposal. Users should not have to care about or want to use Ractor just so they can deep freeze an object graph, because it has utility far beyond Ractor.

@ko1:

> I implemented Object#deep_freeze(skip_shareable: false) for trial.
> https://github.com/ko1/ruby/pull/new/deep_freeze

There's already a prototype of this, though I suspect this logic essentially became `Ractor.make_shareable` in the end. I believe it would be acceptable to implement `Ractor.make_shareable` by calling `deep_freeze`, since there's largely no difference in visible behavior (other than Ractor-specific optimizations like marking a whole graph as shareable).

@eregon:

> How about first having deep_freeze that just freezes everything (except an object's class)?

This is a good proposal. I believe it is what 99% of users currently calling `make_shareable` actually want, and again there's utility well beyond Ractor and concurrency scenarios.

@eregon:

> So we could mark as deeply frozen first, and remember to undo that if we cannot freeze some object.
> However, is there any object that cannot be frozen? I would think not.

The majority of uses of `make_shareable` I have seen are called exactly once on a graph of objects. It does not seem to be typical to repeatedly call `make_shareable`. I understand the desire to have a `shareable` bit for Ractor optimization, but that is a *separate feature* from deep freezing an object graph. There are many cases where we will only call `deep_freeze` once to ensure a graph is fully frozen before publishing it for other code to see, and most of these cases will not try to re-deep-freeze that graph. Ractor's need to "double-check" shareability is orthogonal to the discussion about deep freezing and should not be justification for rejecting `deep_freeze`.

@eregon brought up concerns about not calling the custom `freeze` method on user types, since they may want to eagerly cache some data. I believe that discussion is out of scope. `deep_freeze` would be defined to only freeze the objects that are directly walkable from a root object, and only setting frozen bits. A new overridable method could be introduced that `deep_freeze` would call if present, but otherwise it should just do fast-path object freeze flag setting.

@marcandre:

> Looking at def freeze in the top ~400 gems, I found 64 in sequel gem alone, and 28 definitions in the rest 😅.

This comment provides a breakdown of custom `freeze` methods and the reasons they are implemented. Again, I believe this is out of scope for the discussion at hand. Forcing objects to "prepare for deep freezing" is a separate consideration that will be very library-specific, since every library may want to prepare in a different way. But they *all* want the ability to recursively mark objects as frozen, which is a runtime-level feature.

@ko1:

> We discussed about the name "deep_freeze", and Matz said deep_freeze should be only for freezing, not related to Ractor. So classes/module should be frozen if [C].deep_freeze. This is why I proposed a Object#deep_freeze(skip_shareable: true) and Ractor.make_shareable(obj).

Avoiding classes and modules when deep freezing seems like a reasonable option to me. Naming could make this behavior clear, but again I believe 99% of users just want a plain old object `deep_freeze`. And this is again conflating two separate concerns:

* deep freezing
* marking an entire graph as shareable

These are – and should be – two separate features. The deep freezing feature should not depend on setting shareability bits, since shareability is only meaningful in the context of Ractors.

@ko1:

> So naming issue is reamained?
>
> Object#deep_freeze (matz doesn't like it)
> Object#deep_freeze(skip_sharable: true) (I don't know how Matz feel. And it is difficult to define Class/Module/... on skip_sharable: false)
> Ractor.make_shareable(obj) (clear for me, but it is a bit long)
> Ractor.shareable!(obj) (shorter. is it clear?)
> Object#shareable! (is it acceptable?)
> ... other ideas?

I outline some alternatives below.

## Alternative forms:

@matz didn't like `deep_freeze` five years ago. How do you feel about it now, @matz?

Some alternatives with justification:

* Object.deep_freeze(obj)

This would make sense to avoid users being able to override the `deep_freeze` behavior, and would make it feel more like a global utility method with special behavior.

* Object#freeze(obj, deep: true)
* Object#freeze(obj, recursive: true)

These work within the existing `freeze` method and still convey intent, but may break APIs that don't expect to receive keyword arguments.

And there are some alternative names, which may work as either instance methods or class methods:

* `freeze_recursive`
* `freeze_all`
* `freeze!`
* `freeze_reachable_objects` (long, but a variation of this might address concerns about not freezing classes and modules)

--
https://bugs.ruby-lang.org/
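For illustration, the proposed behavior can be approximated in user land today. The `deep_freeze` helper below is hypothetical (my sketch, not the prototype from the issue); the real in-VM feature would set frozen bits directly instead of making dynamic `freeze` calls:

```ruby
# Hypothetical user-land deep_freeze: recursively freeze everything
# reachable from obj via collections and instance variables.
def deep_freeze(obj, seen = {}.compare_by_identity)
  return obj if seen[obj]   # guard against cycles
  seen[obj] = true
  case obj
  when Hash  then obj.each { |k, v| deep_freeze(k, seen); deep_freeze(v, seen) }
  when Array then obj.each { |e| deep_freeze(e, seen) }
  end
  obj.instance_variables.each do |ivar|
    deep_freeze(obj.instance_variable_get(ivar), seen)
  end
  obj.freeze
end

graph = { list: [1, "two", { three: "3" }] }
deep_freeze(graph)
p graph.frozen?                    # => true
p graph[:list][2][:three].frozen?  # => true
```

Note this sketch deliberately does not freeze classes or modules (it never recurses into `obj.class`), matching the "only for freezing, not related to Ractor" reading discussed above.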

Powered by HyperKitty version 1.3.12.