
Issue #20425 has been updated by tenderlovemaking (Aaron Patterson).

ko1 (Koichi Sasada) wrote in #note-6:
> My idea is simple because it is a simple replacement with an array (and a hash) to contain the arguments (I only proposed a lightweight argument container rather than an array and hash).
> This proposal breaks the assumption of the VM stack structure. I'm afraid this kind of breakage can cause serious issues.
For what it's worth, we've tested this patch in Shopify CI and it's passing all tests. We might be able to try it in production, but I need to ask some people.
> But I may be misunderstanding, so let's talk at RubyKaigi in Okinawa with a whiteboard.
Sure, we can discuss it at RubyKaigi. I agree that your proposal would maintain the stack layout when calling into `...` methods, but I don't think the code would be any simpler due to the extra memory management / GC complexity. I was able to simplify the patch somewhat, so please take a look again.

I decided to test this against RailsBench, and this patch does speed up RailsBench (slightly).

Here is RailsBench with master:

```
$ bundle exec ruby benchmark.rb
ruby 3.4.0dev (2024-04-18T21:11:25Z master 64d0817ea9) [arm64-darwin23]
Command: bundle check 2> /dev/null || bundle install
The Gemfile's dependencies are satisfied
Command: bin/rails db:migrate db:seed
Using 100 posts in the database
itr #1: 1554ms
itr #2: 1519ms
itr #3: 1515ms
itr #4: 1553ms
itr #5: 1550ms
itr #6: 1526ms
itr #7: 1574ms
itr #8: 1522ms
itr #9: 1521ms
itr #10: 1529ms
itr #11: 1526ms
itr #12: 1550ms
itr #13: 1522ms
itr #14: 1551ms
itr #15: 1541ms
itr #16: 1538ms
itr #17: 1552ms
itr #18: 1536ms
itr #19: 1560ms
itr #20: 1549ms
itr #21: 1536ms
itr #22: 1529ms
itr #23: 1542ms
itr #24: 1502ms
itr #25: 1559ms
RSS: 139.1MiB
MAXRSS: 142640.0MiB
Writing file /Users/aaron/git/yjit-bench/benchmarks/railsbench/data/results-ruby-3.4.0-2024-04-18-143710.json
Average of last 10, non-warmup iters: 1540ms
```

Here is RailsBench with the `...` optimization:

```
$ bundle exec ruby benchmark.rb
ruby 3.4.0dev (2024-04-18T21:20:23Z speed-forward 4d698e6d46) [arm64-darwin23]
Command: bundle check 2> /dev/null || bundle install
The Gemfile's dependencies are satisfied
Command: bin/rails db:migrate db:seed
Using 100 posts in the database
itr #1: 1537ms
itr #2: 1523ms
itr #3: 1495ms
itr #4: 1501ms
itr #5: 1520ms
itr #6: 1514ms
itr #7: 1514ms
itr #8: 1486ms
itr #9: 1524ms
itr #10: 1493ms
itr #11: 1472ms
itr #12: 1509ms
itr #13: 1497ms
itr #14: 1492ms
itr #15: 1500ms
itr #16: 1507ms
itr #17: 1526ms
itr #18: 1502ms
itr #19: 1505ms
itr #20: 1492ms
itr #21: 1501ms
itr #22: 1529ms
itr #23: 1519ms
itr #24: 1537ms
itr #25: 1499ms
RSS: 140.0MiB
MAXRSS: 143504.0MiB
Writing file /Users/aaron/git/yjit-bench/benchmarks/railsbench/data/results-ruby-3.4.0-2024-04-18-143623.json
Average of last 10, non-warmup iters: 1512ms
```

The average iteration decreases by about 28ms. Results on my x86 machine are basically similar.

master:

```
aaron@whiteclaw ~/g/y/b/railsbench (main)> bundle exec ruby benchmark.rb
ruby 3.4.0dev (2024-04-18T21:21:01Z master 6443d690ae) [x86_64-linux]
Command: bundle check 2> /dev/null || bundle install
The Gemfile's dependencies are satisfied
Command: bin/rails db:migrate db:seed
Using 100 posts in the database
itr #1: 2227ms
itr #2: 2173ms
itr #3: 2174ms
itr #4: 2171ms
itr #5: 2177ms
itr #6: 2171ms
itr #7: 2172ms
itr #8: 2171ms
itr #9: 2170ms
itr #10: 2173ms
itr #11: 2170ms
itr #12: 2173ms
itr #13: 2170ms
itr #14: 2171ms
itr #15: 2174ms
itr #16: 2171ms
itr #17: 2173ms
itr #18: 2170ms
itr #19: 2176ms
itr #20: 2169ms
itr #21: 2175ms
itr #22: 2169ms
itr #23: 2170ms
itr #24: 2173ms
itr #25: 2170ms
RSS: 110.0MiB
MAXRSS: 110.1MiB
Writing file /home/aaron/git/yjit-bench/benchmarks/railsbench/data/results-ruby-3.4.0-2024-04-18-150418.json
Average of last 10, non-warmup iters: 2171ms
```

This branch:

```
aaron@whiteclaw ~/g/y/b/railsbench (main)> bundle exec ruby benchmark.rb
ruby 3.4.0dev (2024-04-18T21:20:23Z speed-forward 4d698e6d46) [x86_64-linux]
Command: bundle check 2> /dev/null || bundle install
The Gemfile's dependencies are satisfied
Command: bin/rails db:migrate db:seed
Using 100 posts in the database
itr #1: 2199ms
itr #2: 2157ms
itr #3: 2158ms
itr #4: 2153ms
itr #5: 2156ms
itr #6: 2157ms
itr #7: 2155ms
itr #8: 2153ms
itr #9: 2152ms
itr #10: 2160ms
itr #11: 2153ms
itr #12: 2156ms
itr #13: 2153ms
itr #14: 2159ms
itr #15: 2154ms
itr #16: 2154ms
itr #17: 2157ms
itr #18: 2155ms
itr #19: 2158ms
itr #20: 2152ms
itr #21: 2156ms
itr #22: 2154ms
itr #23: 2153ms
itr #24: 2156ms
itr #25: 2151ms
RSS: 107.7MiB
MAXRSS: 107.8MiB
Writing file /home/aaron/git/yjit-bench/benchmarks/railsbench/data/results-ruby-3.4.0-2024-04-18-150520.json
Average of last 10, non-warmup iters: 2154ms
```

Maybe we could try merging this? We can revert if it causes problems. Anyway, I'm happy to discuss in Okinawa! 😄

----------------------------------------
Feature #20425: Optimize forwarding callers and callees
https://bugs.ruby-lang.org/issues/20425#change-108010

* Author: tenderlovemaking (Aaron Patterson)
* Status: Open

----------------------------------------

[This PR](https://github.com/ruby/ruby/pull/10510) optimizes forwarding callers and callees. It only optimizes methods that take only `...` as their parameter and then pass `...` to other calls.

Calls it optimizes look like this:

```ruby
def bar(a) = a
def foo(...) = bar(...) # optimized
foo(123)
```

```ruby
def bar(a) = a
def foo(...) = bar(1, 2, ...) # optimized
foo(123)
```

```ruby
def bar(*a) = a

def foo(...)
  list = [1, 2]
  bar(*list, ...) # optimized
end

foo(123)
```

All variants of the above using `super` are also optimized, including a bare `super` like this:

```ruby
def foo(...)
  super
end
```

This patch eliminates the intermediate allocations made when calling methods that accept `...`. We can observe the allocation elimination like this:

```ruby
def m
  x = GC.stat(:total_allocated_objects)
  yield
  GC.stat(:total_allocated_objects) - x
end

def bar(a) = a
def foo(...) = bar(...)

def test
  m { foo(123) }
end

test
p test # allocates 1 object on master, but 0 objects with this patch
```

```ruby
def bar(a, b:) = a + b
def foo(...) = bar(...)

def test
  m { foo(1, b: 2) }
end

test
p test # allocates 2 objects on master, but 0 objects with this patch
```

## How does it work?

This patch works by using a dynamic stack size when passing forwarded parameters to callees. The caller's call info object (known as the "CI") contains the stack size of the parameters, so we pass the CI object itself as a parameter to the callee.
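The walkthrough below shows disassembly from the patched branch; you can dump comparable bytecode on your own build with `RubyVM::InstructionSequence` (a sketch for inspection only — the exact opcodes and calldata flags vary by Ruby version, and the `FORWARDING` flag only appears on builds that include this patch):

```ruby
# Compile a forwarding delegator and print its bytecode. On a patched
# build, the send calldata for delegatee is tagged FCALL|FORWARDING;
# on master you will see a different calling convention instead.
iseq = RubyVM::InstructionSequence.compile(<<~RUBY)
  def delegatee(a, b) = a + b
  def delegator(...) = delegatee(...)
RUBY
puts iseq.disasm
```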
When forwarding parameters, the forwarding ISeq uses the caller's CI to determine how much stack to copy, then copies the caller's stack before calling the callee. The CI at the forwarded call site is adjusted using information from the caller's CI.

I think this description is kind of confusing, so let's walk through an example with code.

```ruby
def delegatee(a, b) = a + b

def delegator(...)
  delegatee(...) # CI2 (FORWARDING)
end

def caller
  delegator(1, 2) # CI1 (argc: 2)
end
```

Before we call the `delegator` method, the stack looks like this:

```
Executing Line | Code                                 | Stack
---------------+--------------------------------------+--------
              1| def delegatee(a, b) = a + b          | self
              2|                                      | 1
              3| def delegator(...)                   | 2
              4|   #                                  |
              5|   delegatee(...) # CI2 (FORWARDING)  |
              6| end                                  |
              7|                                      |
              8| def caller                           |
->            9|   delegator(1, 2) # CI1 (argc: 2)    |
             10| end                                  |
```

The ISeq for `delegator` is tagged as "forwardable", so when `caller` calls into `delegator`, it writes `CI1` onto the stack as a local variable for the `delegator` method. The `delegator` method has a special local called `...` that holds the caller's CI object. Here is the ISeq disasm for `delegator`:

```
== disasm: #<ISeq:delegator@-e:1 (1,0)-(1,39)>
local table (size: 1, argc: 0 [opts: 0, rest: -1, post: 0, block: -1, kw: -1@-1, kwrest: -1])
[ 1] "..."@0
0000 putself                                                          (   1)[LiCa]
0001 getlocal_WC_0                          "..."@0
0003 send                                   <calldata!mid:delegatee, argc:0, FCALL|FORWARDING>, nil
0006 leave                                  [Re]
```

The local called `...` will contain the caller's CI: CI1. Here is the stack when we enter `delegator`:

```
Executing Line | Code                                 | Stack
---------------+--------------------------------------+---------------
              1| def delegatee(a, b) = a + b          | self
              2|                                      | 1
              3| def delegator(...)                   | 2
->            4|   #                                  | CI1 (argc: 2)
              5|   delegatee(...) # CI2 (FORWARDING)  | cref_or_me
              6| end                                  | specval
              7|                                      | type
              8| def caller                           |
              9|   delegator(1, 2) # CI1 (argc: 2)    |
             10| end                                  |
```

The CI for the `delegatee` call on line 5 is tagged as "FORWARDING", so it knows to memcopy the caller's stack before calling `delegatee`. In this case, it will memcopy self, 1, and 2 to the stack before calling `delegatee`. It knows how much memory to copy from the caller because `CI1` contains stack size information (argc: 2).

Before executing the `send` instruction, we push `...` on the stack. The `send` instruction pops `...`, and because it is tagged with `FORWARDING`, it knows to memcopy (using the information in the CI it just popped):

```
== disasm: #<ISeq:delegator@-e:1 (1,0)-(1,39)>
local table (size: 1, argc: 0 [opts: 0, rest: -1, post: 0, block: -1, kw: -1@-1, kwrest: -1])
[ 1] "..."@0
0000 putself                                                          (   1)[LiCa]
0001 getlocal_WC_0                          "..."@0
0003 send                                   <calldata!mid:delegatee, argc:0, FCALL|FORWARDING>, nil
0006 leave                                  [Re]
```

Instruction 0001 puts the caller's CI on the stack. `send` is tagged with FORWARDING, so it reads the CI and _copies_ the caller's stack to this stack:

```
Executing Line | Code                                 | Stack
---------------+--------------------------------------+---------------
              1| def delegatee(a, b) = a + b          | self
              2|                                      | 1
              3| def delegator(...)                   | 2
              4|   #                                  | CI1 (argc: 2)
->            5|   delegatee(...) # CI2 (FORWARDING)  | cref_or_me
              6| end                                  | specval
              7|                                      | type
              8| def caller                           | self
              9|   delegator(1, 2) # CI1 (argc: 2)    | 1
             10| end                                  | 2
```

The "FORWARDING" call site combines information from CI1 with CI2 in order to support passing other values in addition to the `...` value, as well as to perfectly forward splat args, kwargs, etc.

Since we're able to copy the stack from `caller` into `delegator`'s stack, we can avoid allocating objects.

## Why?

I want to do this to eliminate object allocations for delegate methods. My long-term goal is to implement `Class#new` in Ruby, and it uses `...`.
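To make that concrete, here is a rough Ruby-level sketch of a constructor built on `...` forwarding (not the actual PR; `ruby_new` is a hypothetical name so the real `Class#new` is left untouched):

```ruby
# Hypothetical Ruby-level constructor: allocate the object, then forward
# all positional, keyword, and block arguments to #initialize via `...`.
class Class
  def ruby_new(...)
    obj = allocate
    obj.__send__(:initialize, ...) # __send__ because initialize is private
    obj
  end
end

class Point
  attr_reader :x, :y
  def initialize(x:, y:)
    @x, @y = x, y
  end
end

pt = Point.ruby_new(x: 1, y: 2)
p [pt.x, pt.y] # => [1, 2]
```

With the optimization in this patch, the `...` forwarding in a method like this can avoid allocating an intermediate hash for the keyword arguments.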
I was able to implement `Class#new` in Ruby [here](https://github.com/ruby/ruby/pull/9289). If we adopt the technique in this patch, then we can optimize allocating objects whose `initialize` takes keyword parameters. For example, this code allocates 2 objects: one for `SomeObject`, and one for the kwargs:

```ruby
SomeObject.new(foo: 1)
```

If we combine this technique with a Ruby implementation of `Class#new`, then we can reduce allocations for this common operation.

--
https://bugs.ruby-lang.org/