Issue #19395 has been reported by luke-gru (Luke Gruber).
----------------------------------------
Bug #19395: Process forking within non-main Ractor creates child stuck in busy loop
https://bugs.ruby-lang.org/issues/19395
* Author: luke-gru (Luke Gruber)
* Status: Open
* Priority: Normal
* Backport: 2.7: UNKNOWN, 3.0: UNKNOWN, 3.1: UNKNOWN, 3.2: UNKNOWN
----------------------------------------
```ruby
def test_fork_in_ractor
  r2 = Ractor.new do
    pid = fork do
      exit Ractor.count
    end
    pid
  end
  pid = r2.take
  puts "Process #{Process.pid} waiting for #{pid}"
  _pid, status = Process.waitpid2(pid) # stuck forever
  if status.exitstatus != 1
    raise "status is #{status.exitstatus}"
  end
end
test_fork_in_ractor()
```
$ top # shows CPU usage is high for child process
--
https://bugs.ruby-lang.org/
Issue #19408 has been reported by luke-gru (Luke Gruber).
----------------------------------------
Bug #19408: Object no longer frozen after moved from a ractor
https://bugs.ruby-lang.org/issues/19408
* Author: luke-gru (Luke Gruber)
* Status: Open
* Priority: Normal
* Backport: 2.7: UNKNOWN, 3.0: UNKNOWN, 3.1: UNKNOWN, 3.2: UNKNOWN
----------------------------------------
I think frozen objects should still be frozen after a move.
```ruby
r = Ractor.new do
  obj = receive
  p obj.frozen? # should be true but is false
  p obj
end
obj = [Object.new].freeze
r.send(obj, move: true)
r.take
```
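Until this is resolved, one receiver-side workaround is to restore the invariant manually. A minimal sketch, assuming the receiver knows the object is supposed to arrive frozen:

```ruby
# Workaround sketch (assumes the receiver knows the object should be
# frozen): re-freeze after the move, since the moved object currently
# loses its frozen status.
r = Ractor.new do
  obj = Ractor.receive
  obj.freeze unless obj.frozen? # restore the invariant lost by the move
  obj.frozen?
end

obj = [Object.new].freeze
r.send(obj, move: true)
p r.take # => true
```

This only papers over the symptom for objects the receiver knows about; deeply nested frozen objects would need the same treatment recursively.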
Issue #19905 has been reported by hi(a)joaofernandes.me (Joao Fernandes).
----------------------------------------
Feature #19905: Introduce `Queue#peek`
https://bugs.ruby-lang.org/issues/19905
* Author: hi(a)joaofernandes.me (Joao Fernandes)
* Status: Open
* Priority: Normal
----------------------------------------
This ticket proposes the introduction of the `Queue#peek` method, similar to what we can find in other object-oriented languages such as Java and C#. This method is similar to `Queue#pop`, but does not change the data, nor does it require a lock.
```
q = Queue.new([1,2,3])
=> #<Thread::Queue:0x00000001065d7148>
q.peek
=> 1
q.peek
=> 1
```
I have felt the need for this while debugging, but I think it can also be of practical use beyond that. I believe the only drawback is that newcomers could misuse it in multi-threaded code without taking into account that this method is not thread-safe.
I also volunteer myself to implement this method.
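In the meantime, the behaviour can be approximated in plain Ruby. A minimal sketch built on `Mutex`/`ConditionVariable` (the `PeekableQueue` name is made up here, and unlike the proposal this sketch does take a lock in `peek`):

```ruby
# A peekable queue sketch: same push/pop shape as Thread::Queue, plus a
# non-destructive #peek. PeekableQueue is a hypothetical class name.
class PeekableQueue
  def initialize(items = [])
    @items = items.dup
    @mutex = Mutex.new
    @cond = ConditionVariable.new
  end

  def push(obj)
    @mutex.synchronize do
      @items << obj
      @cond.signal
    end
  end

  def pop
    @mutex.synchronize do
      @cond.wait(@mutex) while @items.empty?
      @items.shift
    end
  end

  # Returns the front element without removing it (nil when empty).
  def peek
    @mutex.synchronize { @items.first }
  end
end

q = PeekableQueue.new([1, 2, 3])
p q.peek # => 1
p q.peek # => 1
p q.pop  # => 1
```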
Issue #19542 has been reported by hanazuki (Kasumi Hanazuki).
----------------------------------------
Bug #19542: Operations on zero-sized IO::Buffer are raising
https://bugs.ruby-lang.org/issues/19542
* Author: hanazuki (Kasumi Hanazuki)
* Status: Open
* Priority: Normal
* ruby -v: ruby 3.2.1 (2023-02-08 revision 31819e82c8) [x86_64-linux]
* Backport: 2.7: UNKNOWN, 3.0: UNKNOWN, 3.1: UNKNOWN, 3.2: UNKNOWN
----------------------------------------
I found that an IO::Buffer of zero length is not cloneable.
```
% ruby -v
ruby 3.2.1 (2023-02-08 revision 31819e82c8) [x86_64-linux]
% ruby -e 'p IO::Buffer.for("").dup'
-e:1:in `initialize_copy': The buffer is not allocated! (IO::Buffer::AllocationError)
	from -e:1:in `initialize_dup'
	from -e:1:in `dup'
	from -e:1:in `<main>'
% ruby -e 'p IO::Buffer.new(0).dup'
-e:1: warning: IO::Buffer is experimental and both the Ruby and C interface may change in the future!
-e:1:in `initialize_copy': The buffer is not allocated! (IO::Buffer::AllocationError)
	from -e:1:in `initialize_dup'
	from -e:1:in `dup'
	from -e:1:in `<main>'
```
It seems `IO::Buffer.new(0)` allocates no memory for the buffer on object creation and thus prohibits reading from or writing to it. So the `#dup` method, copying zero bytes into the new IO::Buffer, raises the exception.
Empty buffers, however, often appear in corner cases of ordinary operations (encrypting an empty string, encoding an empty list of items into binary, etc.), and it would be easier if such cases could be handled consistently.
Other operations on null IO::Buffers are also useful but currently raise:
```
IO::Buffer.new(0) <=> IO::Buffer.new(1)
IO::Buffer.new(0).each(:U8).to_a
IO::Buffer.new(0).get_values([], 0)
IO::Buffer.new(0).set_values([], 0, [])
```
I'm not sure whether this is a bug or by design, but at the very least I don't want cloning and comparison to raise.
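As a workaround until the behaviour is settled, callers can special-case zero-length buffers before duplicating. A sketch with a hypothetical `dup_buffer` helper:

```ruby
# Workaround sketch: avoid calling #dup on a zero-length IO::Buffer,
# which currently raises IO::Buffer::AllocationError, by substituting a
# fresh empty buffer instead. dup_buffer is a hypothetical helper name.
def dup_buffer(buffer)
  return IO::Buffer.new(0) if buffer.size.zero? # avoid AllocationError
  buffer.dup
end

p dup_buffer(IO::Buffer.for("")).size         # => 0
p dup_buffer(IO::Buffer.for("ab")).get_string # => "ab"
```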
Issue #19461 has been reported by ioquatix (Samuel Williams).
----------------------------------------
Bug #19461: Time.local performance tanks in forked process (on macOS only?)
https://bugs.ruby-lang.org/issues/19461
* Author: ioquatix (Samuel Williams)
* Status: Open
* Priority: Normal
* Backport: 2.7: UNKNOWN, 3.0: UNKNOWN, 3.1: UNKNOWN, 3.2: UNKNOWN
----------------------------------------
The following program demonstrates a performance regression in forked child processes when invoking `Time.local`:
```ruby
require 'benchmark'
require 'time'
def sir_local_alot
  result = Benchmark.measure do
    10_000.times do
      tm = ::Time.local(2023)
    end
  end
  $stderr.puts result
end

sir_local_alot

pid = fork do
  sir_local_alot
end

Process.wait(pid)
```
On Linux the performance is similar in parent and child, but on macOS it is over 100x worse in the forked child on my M1 laptop.
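As a diagnostic, it may be worth checking whether pinning the timezone via the `TZ` environment variable changes the result, since `Time.local` consults the system timezone database when `TZ` is unset. This is an investigation aid, not a confirmed fix:

```ruby
require 'benchmark'

# Diagnostic sketch: compare Time.local cost with TZ unset vs. pinned.
# Whether this affects the post-fork slowdown on macOS is an assumption
# to verify, not an established fact.
def time_local_cost
  Benchmark.realtime { 1_000.times { Time.local(2023) } }
end

default_cost = time_local_cost
ENV['TZ'] = 'UTC'
pinned_cost = time_local_cost
$stderr.puts format('TZ unset: %.4fs, TZ=UTC: %.4fs', default_cost, pinned_cost)
```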
Issue #19758 has been reported by MyCo (Maik Menz).
----------------------------------------
Misc #19758: Statically link ext/json
https://bugs.ruby-lang.org/issues/19758
* Author: MyCo (Maik Menz)
* Status: Open
* Priority: Normal
----------------------------------------
Hi,
I'm building Ruby as both a dynamic and a static library with MSVC for a project. Everything appears to work fine, but now I'm trying to use the json extension, and it only works with the dynamically linked version.
In the statically linked version it reports that json/pure is missing, but on closer inspection that's because it can't find json/ext/parser (and probably also json/ext/generator) in the first place.
I can see that static libs for both parser and generator are created in the build directory, but they aren't linked into the Ruby library.
With my limited knowledge of the build process, my first attempt was to add both of those libs to `LOCAL_LIBS`, and they now appear in the linking step.
But this still doesn't change anything: the two libraries are still not found in the statically linked Ruby build.
What am I missing? What do I have to do to get those linked into the static lib?
Regards
Maik
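A possibly relevant detail: adding the `.lib` files to `LOCAL_LIBS` is likely not sufficient on its own, because a statically linked extension also needs its `Init_*` entry point called and its feature registered as provided, which the build normally arranges through a generated `ext/extinit.c`. On the autotools build that is driven by listing extensions in `ext/Setup`; whether the MSVC build honours the same file, and the exact entries needed for json's sub-extensions, are assumptions to verify:

```
# ext/Setup (sketch; entry names for json's sub-extensions unverified)
json
json/generator
json/parser
```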
Issue #19246 has been reported by thomthom (Thomas Thomassen).
----------------------------------------
Bug #19246: Rebuilding the loaded feature index much slower in Ruby 3.1
https://bugs.ruby-lang.org/issues/19246
* Author: thomthom (Thomas Thomassen)
* Status: Open
* Priority: Normal
* Backport: 2.7: UNKNOWN, 3.0: UNKNOWN, 3.1: UNKNOWN
----------------------------------------
Some background to this issue (this is an unconventional use of Ruby, but I hope you'll bear with me):
We ship the Ruby interpreter with our desktop application (SketchUp) for plugin support.
One feature we have had since at least 2006 (maybe earlier; it's hard to track history beyond that) is a custom alternate `require` method, `Sketchup.require`, which allows users of our API to load encrypted Ruby files.
This originally used `rb_provide` to add the path of the encrypted file to the list of loaded features. However, somewhere between Ruby 2.2 and 2.5 some string optimisations were made, and `rb_provide` stopped using a copy of the string passed to it; instead it just held on to the pointer. In our case that string came from user land, passed in via `Sketchup.require`, and would eventually be garbage collected, causing access-violation crashes.
To work around that we changed our custom `Sketchup.require` to push to `$LOADED_FEATURES` directly. There was a small penalty from the index being rebuilt after that, but it was negligible.
Recently we tried to upgrade the Ruby interpreter in our application from 2.7 to 3.1 and found a major performance reduction when using our `Sketchup.require`: a plugin that used to load in half a second now takes 30 seconds.
From https://bugs.ruby-lang.org/issues/18452 it sounds like _some_ extra penalty is expected due to changes in how the index is built. But should it really be this much?
Example minimal repro to simulate the issue:
```ruby
# frozen_string_literal: true
require 'benchmark'

iterations = 200

foo_files = iterations.times.map { |i| "#{__dir__}/tmp/foo-#{i}.rb" }
foo_files.each { |f| File.write(f, "") }

bar_files = iterations.times.map { |i| "#{__dir__}/tmp/bar-#{i}.rb" }
bar_files.each { |f| File.write(f, "") }

biz_files = iterations.times.map { |i| "#{__dir__}/tmp/biz-#{i}.rb" }
biz_files.each { |f| File.write(f, "") }

Benchmark.bm do |x|
  x.report('normal') {
    foo_files.each { |file|
      require file
    }
  }
  x.report('loaded_features') {
    foo_files.each { |file|
      require file
      $LOADED_FEATURES << "#{file}-fake.rb"
    }
  }
  x.report('normal again') {
    biz_files.each { |file|
      require file
    }
  }
end
```
```
C:\Users\Thomas\SourceTree\ruby-perf>ruby27.bat
ruby 2.7.4p191 (2021-07-07 revision a21a3b7d23) [x64-mingw32]
C:\Users\Thomas\SourceTree\ruby-perf>ruby test-require.rb
                      user     system      total        real
normal            0.000000   0.031000   0.031000 (  0.078483)
loaded_features   0.015000   0.000000   0.015000 (  0.038759)
normal again      0.016000   0.032000   0.048000 (  0.076940)
```
```
C:\Users\Thomas\SourceTree\ruby-perf>ruby30.bat
ruby 2.7.4p191 (2021-07-07 revision a21a3b7d23) [x64-mingw32]
C:\Users\Thomas\SourceTree\ruby-perf>ruby test-require.rb
                      user     system      total        real
normal            0.000000   0.031000   0.031000 (  0.074733)
loaded_features   0.032000   0.000000   0.032000 (  0.038898)
normal again      0.000000   0.047000   0.047000 (  0.076343)
```
```
C:\Users\Thomas\SourceTree\ruby-perf>ruby31.bat
ruby 3.1.2p20 (2022-04-12 revision 4491bb740a) [x64-mingw-ucrt]
C:\Users\Thomas\SourceTree\ruby-perf>ruby test-require.rb
                      user     system      total        real
normal            0.016000   0.031000   0.047000 (  0.132633)
loaded_features   1.969000  11.500000  13.469000 ( 18.395761)
normal again      0.031000   0.125000   0.156000 (  0.249130)
```
Right now we're exploring options to deal with this, because the performance degradation is a blocker for upgrading. We also have 16 years of plugins created by third-party developers, which makes it impossible for us to drop this feature.
Some options as-is, none of which are ideal:
1. We revert to using `rb_provide`, but ensure the string passed in is not owned by Ruby, instead building a list of strings that we keep around for the duration of the application process. The problem is that some of our plugin developers have occasionally released plugins that touch `$LOADED_FEATURES`, and if such a plugin is installed on a user's machine it might cause the application to become unresponsive for minutes. The other non-ideal aspect is that we'd be using `rb_provide` in ways it wasn't really intended for (from what I understand), and it's not an official API.
2. We create a separate way for our `Sketchup.require` to keep track of its loaded features, but then it would diverge even more from the behaviour of `require`. Replicating `require`'s functionality is not trivial and would be prone to subtle errors and possible divergence. It also doesn't address the issue that code in existing plugins touches `$LOADED_FEATURES`. (And that's not something we can just ask people to clean up; from previous experience, old versions stick around for a long time and are very hard to purge from circulation.)
I have two questions for the Ruby maintainers:
1. Would it be reasonable to add an API for adding/removing/checking `$LOADED_FEATURES` that would allow a more ideal implementation of custom `require` functionality?
2. Is the performance difference in rebuilding the loaded-feature index really expected to be as high as what we're seeing, an increase of nearly 100 times? Is there something that might be addressed to make the rebuild less expensive? (This would really help with our challenge of third-party plugins occasionally touching the global.)
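For illustration, option 2 could start from a dedicated registry (all names here are hypothetical, and the decrypt-and-evaluate step is elided):

```ruby
# Sketch of option 2: track custom-required features in a dedicated
# registry instead of $LOADED_FEATURES, so plugin code touching the
# global cannot invalidate Ruby's loaded-feature index.
module CustomRequire
  REGISTRY = {}

  def self.require(path)
    key = File.expand_path(path)
    return false if REGISTRY.key?(key) # already loaded
    REGISTRY[key] = true
    # ... decrypt and evaluate the file here ...
    true
  end
end
```

As noted above, the real cost of this approach is divergence from `require` semantics rather than implementation difficulty.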
Issue #19857 has been reported by ioquatix (Samuel Williams).
----------------------------------------
Bug #19857: Eval coverage is reset after each `eval`.
https://bugs.ruby-lang.org/issues/19857
* Author: ioquatix (Samuel Williams)
* Status: Open
* Priority: Normal
* Assignee: ioquatix (Samuel Williams)
* Backport: 3.0: UNKNOWN, 3.1: UNKNOWN, 3.2: UNKNOWN
----------------------------------------
It seems like `eval` based coverage is reset every time eval is invoked.
```ruby
#!/usr/bin/env ruby
require 'coverage'

def measure(flag)
  c = Class.new
  c.class_eval(<<~RUBY, "foo.rb", 1)
    def foo(flag)
      if flag
        puts "foo"
      else
        puts "bar"
      end
    end
  RUBY
  return c.new.foo(flag)
end

Coverage.start(lines: true, eval: true)

# Depending on the order of these two calls, a different result is computed,
# because each evaluation of the code is considered distinct, even when the
# content and path are the same.
measure(false)
measure(true)

p Coverage.result
```
Further investigation is required.
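Until the root cause is fixed, one workaround is to snapshot coverage between `eval`s and merge the snapshots by hand. A sketch of the merge step (`merge_coverage` is a hypothetical helper; it assumes both snapshots use the `lines:` shape with equal-length arrays per path):

```ruby
# Merge two lines-mode coverage result hashes, summing hit counts.
# nil entries (non-executable lines) stay nil when both sides are nil.
def merge_coverage(a, b)
  a.merge(b) do |_path, left, right|
    { lines: left[:lines].zip(right[:lines]).map { |x, y|
        x && y ? x + y : (x || y)
      } }
  end
end

a = { "foo.rb" => { lines: [1, 0, nil] } }
b = { "foo.rb" => { lines: [0, 2, nil] } }
p merge_coverage(a, b)["foo.rb"][:lines] # => [1, 2, nil]
```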
Issue #19387 has been reported by luke-gru (Luke Gruber).
----------------------------------------
Bug #19387: Issue with ObjectSpace.each_object not returning IO objects after starting a ractor
https://bugs.ruby-lang.org/issues/19387
* Author: luke-gru (Luke Gruber)
* Status: Open
* Priority: Normal
* Backport: 2.7: UNKNOWN, 3.0: UNKNOWN, 3.1: UNKNOWN, 3.2: UNKNOWN
----------------------------------------
```ruby
r = Ractor.new do
  receive # blocks; the problem is not the termination of the ractor but the starting
end

ObjectSpace.each_object(IO) { |io|
  p io # we get no objects
}
```
Issue #19717 has been reported by ioquatix (Samuel Williams).
----------------------------------------
Bug #19717: `ConditionVariable#signal` is not fair when the wakeup is consistently spurious.
https://bugs.ruby-lang.org/issues/19717
* Author: ioquatix (Samuel Williams)
* Status: Open
* Priority: Normal
* Backport: 3.0: UNKNOWN, 3.1: UNKNOWN, 3.2: UNKNOWN
----------------------------------------
For background, see this issue <https://github.com/socketry/async/issues/99>.
It looks like `ConditionVariable#signal` is not fair, if the calling thread immediately reacquires the resource.
I've given a detailed reproduction here as it's non-trivial: <https://github.com/ioquatix/ruby-condition-variable-timeout>.
Because the spurious wakeup occurs, the thread is pushed to the back of the waitq, which means any other waiting thread will acquire the resource, and that thread will perpetually be at the back of the queue.
I believe the solution is to change `ConditionVariable#signal` so that it only removes the thread from the waitq if the thread can actually acquire the lock. Otherwise it should be left in place so that the order is retained; this should result in fair scheduling.
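Independent of how this is resolved in the VM, fair handoff can be sketched at the application level by giving every waiter its own rendezvous object, so signal order is explicit and a woken thread cannot be reordered behind later arrivals. `FairCondition` is a hypothetical name and this is not the proposed VM fix:

```ruby
# FIFO-fair signalling sketch: each waiter blocks on its own Queue, and
# #signal wakes exactly the longest-waiting thread.
class FairCondition
  def initialize
    @mutex = Mutex.new
    @waiters = []
  end

  def wait(lock)
    waiter = Queue.new
    @mutex.synchronize { @waiters << waiter }
    lock.unlock
    waiter.pop # blocks until this specific waiter is signalled
    lock.lock
  end

  def signal
    waiter = @mutex.synchronize { @waiters.shift }
    waiter << true if waiter # returns nil when nobody is waiting
  end
end
```

Because a signalled waiter consumes its own queue entry rather than re-entering a shared waitq, a spurious-wakeup-style reshuffle cannot starve it.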