tatersolid 6 years ago

Doesn’t 42 bits of address space seem terribly shortsighted? I know most current x64 hardware can only address 48 bits of physical address space, but at $dayjob we already own servers with 2TB of RAM.

Baking a 4TB limit into new GC code seems... unwise.
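
For reference, the arithmetic linking the two numbers, as a minimal sketch (the 42-bit address field is from the ZGC design notes):

```cpp
#include <cstdio>

int main() {
    // 42 address bits cover 2^42 bytes of heap, which is the 4TB limit above.
    unsigned long long bytes = 1ULL << 42;
    std::printf("%llu bytes = %llu TiB\n", bytes, bytes >> 40); // prints "... = 4 TiB"
    return 0;
}
```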

gok 6 years ago

Curious how this compares performance-wise with Azul’s approach.

  • nwmcsween 6 years ago

    Azul's should be faster as they use x86_64 PCID.

jerven 6 years ago

Interestingly enough, it's the second public JVM project I know of funded by Oracle that doesn't support Solaris or SPARC (anymore).

(Graal work for SPARC seems to have stopped as well.)

jcdavis 6 years ago

So interestingly, they are storing extra metadata in the object pointers, which means no more compressed oops (i.e. 32-bit pointers). Curious what the effect of that is on heap sizes, considering most JVMs run with <32 GB heaps.
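
A rough sketch of what metadata-in-the-pointer looks like; the 42-bit address field matches the design notes, but the bit names and positions here are illustrative, not copied from HotSpot:

```cpp
#include <cassert>
#include <cstdint>

// Low 42 bits: object address (hence the ~4TB heap ceiling).
// Bits above: GC metadata ("colors"). Names/positions are assumptions.
constexpr uint64_t kAddressMask = (1ULL << 42) - 1;
constexpr uint64_t kMarked0     = 1ULL << 42;
constexpr uint64_t kMarked1     = 1ULL << 43;
constexpr uint64_t kRemapped    = 1ULL << 44;
constexpr uint64_t kFinalizable = 1ULL << 45;

// Because the color rides in the upper bits, pointers must stay a full
// 64 bits wide, which rules out 32-bit compressed oops.
inline uint64_t strip_color(uint64_t colored) { return colored & kAddressMask; }
inline bool is_remapped(uint64_t colored)     { return (colored & kRemapped) != 0; }

int main() {
    uint64_t p = 0x1234ULL | kRemapped; // a "colored" pointer
    assert(strip_color(p) == 0x1234ULL);
    assert(is_remapped(p));
    return 0;
}
```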

  • andrewguy9 6 years ago

    I think other people do this as well. As I recall, Windows encodes r/w/execute memory permissions into a mask at the top of the pointer to avoid a table lookup when taking a fault.

  • chrisseaton 6 years ago

    I think if your heap is less than 32GB then this GC isn't for you in the first place - it's designed for really big heaps.

    • monocasa 6 years ago

      Is that true? Azul's similar GC was designed for large heaps, but it worked well with smaller heaps as well AFAIK.

      • chrisseaton 6 years ago

        The design documents say it's 'optimised for very large heaps'. 32GB isn't 'very large' these days.

        • ysleepy 6 years ago

          It says: "Goals: Multi-terabyte heaps"

          • chrisseaton 6 years ago

            I don’t understand. Do you think that contradicts me?

            • klez 6 years ago

              I think ysleepy was agreeing with you and adding a data point to the discussion.

uluyol 6 years ago

This seems to be the same as what Red Hat is working on with Shenandoah. I don't understand how the goals differ, and from a 1000 ft, incomplete view, the basic designs seem similar too.

Is there anyone who can clarify?

  • needusername 6 years ago

    From what I understand, the goals are similar but the approaches are different.

    - Shenandoah uses Brooks-style forwarding pointers, whereas ZGC uses colored pointers and off-heap forwarding tables (both sketched after this list).

    - Shenandoah could in theory run on Windows; AFAIK this is a non-goal for ZGC.

    - Shenandoah tries to return unused heap to the OS, whereas currently the recommendation for ZGC seems to be -Xms == -Xmx. In addition, ZGC triple-maps the heap, which can lead to interesting challenges in resource usage accounting (see the mmap sketch after this list).

    - Neither of them is generational, although Shenandoah allows for custom policies.

    - Both of them seem to disable biased locking by default. I would guess the latency of deoptimization is simply too high.

    - Shenandoah supports pinning objects in JNI criticals without disabling the GC.

    - Somewhat unsurprisingly, ZGC introduces several HotSpot latency optimizations that will also benefit Shenandoah (thread-local handshakes, concurrent reference processing, ...).
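
    To make the forwarding difference concrete, a minimal sketch; all names and types are hypothetical stand-ins, not HotSpot code:

    ```cpp
    #include <cstdint>
    #include <unordered_map>

    // Brooks-style (Shenandoah): each object carries an extra indirection
    // word pointing to itself, or to the new copy after evacuation, so
    // every access pays one extra dereference.
    struct BrooksObject {
        BrooksObject* forwardee; // the Brooks pointer
        // ... object fields ...
    };

    inline BrooksObject* resolve(BrooksObject* obj) {
        return obj->forwardee; // always follow the indirection word
    }

    // Colored-pointer style (ZGC): no per-object word. A metadata bit in
    // the pointer says whether it already refers to the current copy; if
    // not, a slow path consults an off-heap forwarding table. The bit
    // position and table type are illustrative.
    constexpr uint64_t kRemapped = 1ULL << 44;
    std::unordered_map<uint64_t, uint64_t> forwarding_table;

    inline uint64_t resolve(uint64_t colored) {
        if (colored & kRemapped) return colored;  // fast path
        auto it = forwarding_table.find(colored); // slow path
        return it != forwarding_table.end() ? it->second : colored;
    }

    int main() {
        BrooksObject obj{&obj}; // not yet moved: forwardee points to itself
        return resolve(&obj) == &obj ? 0 : 1;
    }
    ```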
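
    And the multi-mapping point, sketched with Linux memfd_create + mmap (ZGC reportedly maps the same heap memory at multiple virtual addresses, one per pointer color, so tools that sum mapping sizes over-count):

    ```cpp
    #include <sys/mman.h>
    #include <sys/syscall.h>
    #include <unistd.h>
    #include <cassert>
    #include <cstring>

    int main() {
        size_t len = 4096;
        // Anonymous memory object standing in for the heap.
        int fd = static_cast<int>(syscall(SYS_memfd_create, "heap", 0));
        assert(fd >= 0);
        ftruncate(fd, len);

        // The same physical pages, mapped at two virtual addresses.
        char* view1 = static_cast<char*>(
            mmap(nullptr, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0));
        char* view2 = static_cast<char*>(
            mmap(nullptr, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0));

        std::strcpy(view1, "hello");
        assert(std::strcmp(view2, "hello") == 0); // one page, two views

        // Naive RSS accounting now sees 2x (for ZGC, 3x) the real usage.
        return 0;
    }
    ```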