`{:EXIT, #PID<0.2945.0>, :normal}` #79
Comments
G'day! I've tried it out, but despite elixir-lang/elixir#5554 …
Hey!! Sorry, I couldn't look into it yet, I'll try to check it out this week. On the other hand, out of curiosity, have you tried with the partitioned adapter?
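For reference (not from the thread), a sketch of what such a cache definition might look like, assuming Nebulex v2's `Nebulex.Adapters.Partitioned` adapter naming:

```elixir
defmodule MyApp.PartitionedCache do
  use Nebulex.Cache,
    otp_app: :my_app,
    # Partitions entries across the cluster's nodes instead of fully
    # replicating every entry to every node, which changes the
    # write-amplification profile of each cache operation.
    adapter: Nebulex.Adapters.Partitioned
end
```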
Not yet, no, but your hunch strikes me as reasonable. I've figured out most of our recent trouble was coming from … Next worst was … Together, I think those changes mitigate most of our performance impact from this issue. We still need extra …
Right, I think it could be related to … In the meantime, I'd suggest: why don't you run the same tests but with the master branch (which is now v2)? The cache would be something like:

```elixir
defmodule MyApp.ReplicatedCache do
  use Nebulex.Cache,
    otp_app: :my_app,
    adapter: Nebulex.Adapters.Replicated
end
```

And the config, for example:

```elixir
config :my_app, MyApp.ReplicatedCache,
  primary: [
    gc_interval: :timer.seconds(3600),
    backend: :ets
  ]
```

As you notice, we can test the replicated adapter but using the `:ets` backend. Thanks!!
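As a side note beyond the original comment, here is a small usage sketch, assuming the module and config above and Nebulex v2's `put`/`get` API:

```elixir
# Start the cache; in a real app it belongs in the application's
# supervision tree rather than being started ad hoc like this.
{:ok, _pid} =
  Supervisor.start_link([MyApp.ReplicatedCache], strategy: :one_for_one)

# Basic round trip against the replicated adapter backed by :ets.
:ok = MyApp.ReplicatedCache.put(:greeting, "g'day")
"g'day" = MyApp.ReplicatedCache.get(:greeting)
```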
@garthk I have updated Nebulex to use the latest version of …
Looking good! I started those tests off in 1.2.2, made sure they failed, switched to …
G'day!

Replicated cache operations when trapping exits result in exit messages for processes you didn't start yourself, in turn causing `FunctionClauseError` crashes and other misbehaviour.

I've confirmed with `:dbg` that those processes were started by the cache's task supervisor; a sketch of that kind of trace follows.
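This is illustrative only, not the reporter's actual trace session; `MyCache` stands in for whatever cache module is under test:

```elixir
:dbg.tracer()           # start the default tracer, printing to the shell
:dbg.p(:all, [:procs])  # trace process events (spawn, link, exit) globally

# Run a replicated cache operation and look for `spawn` trace events
# whose source pid is the cache's Task.Supervisor.
MyCache.put(:key, "value")

:dbg.stop()             # stop tracing when done
```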
By modifying the code, I also confirmed it was `Nebulex.RPC.multi_call/3` in particular (nebulex/lib/nebulex/rpc.ex, lines 90 to 97 in 38c2b4d).
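For illustration, a minimal standalone sketch (not Nebulex code) of why `Task.Supervisor.async/2` produces those messages: `async` links the task to the calling process, so a caller that traps exits gets `{:EXIT, pid, :normal}` in its mailbox when the task finishes.

```elixir
{:ok, sup} = Task.Supervisor.start_link()
Process.flag(:trap_exit, true)

# async/2 links the spawned task to the calling process...
task = Task.Supervisor.async(sup, fn -> :ok end)
:ok = Task.await(task)

# ...so the finished task's exit signal arrives as a message we must
# now handle, even though we never spawned that process directly.
receive do
  {:EXIT, pid, :normal} -> IO.puts("leaked EXIT from #{inspect(pid)}")
after
  1_000 -> IO.puts("no EXIT message")
end
```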
If you switch to `Task.Supervisor.async_nolink/4` you'll not cause those messages. I think your use of `Task.yield_many/2` below should still catch them exiting?
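To make the suggestion concrete, here is a minimal standalone sketch, not the actual Nebulex patch: `async_nolink` runs the task under the supervisor without linking it to the caller, and `Task.yield_many/2` still surfaces each task's outcome, including crashes, as `{:exit, reason}` tuples.

```elixir
{:ok, sup} = Task.Supervisor.start_link()
Process.flag(:trap_exit, true)

# async_nolink spawns the tasks under the supervisor without linking
# them to the caller, so no {:EXIT, _, _} lands in our mailbox.
tasks = [
  Task.Supervisor.async_nolink(sup, fn -> :fine end),
  Task.Supervisor.async_nolink(sup, fn -> raise "boom" end)
]

# yield_many/2 reports {:ok, result}, {:exit, reason}, or nil (timeout)
# per task, so crashed tasks are still observed here.
for {_task, result} <- Task.yield_many(tasks, 5_000) do
  IO.inspect(result)
end
```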