It makes the code icky and hard to debug, when you could simply return a new immutable object for every state change.

EDIT: why not just create a new object and reassign the variable to point to the new one?

  • wewbull@feddit.uk
    2 months ago

    You can do exactly as you say, and you’re right: it makes code easier to reason about. However, it all comes down to efficiency. Copying a large data structure just to modify one element is slow, so we put up with the ick of mutable data to preserve performance.
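    In Python terms (a minimal sketch of the commenter’s point, not code from the thread), the difference shows up as a list (mutable) versus a tuple (immutable):

```python
# Mutable update: change one element in place, O(1).
scores = [10, 20, 30, 40]
scores[2] = 99

# Immutable update: rebuild the whole container, O(n),
# because every element has to be copied into a new tuple.
frozen = (10, 20, 30, 40)
frozen = frozen[:2] + (99,) + frozen[3:]  # a brand-new tuple

assert scores == [10, 20, 99, 40]
assert frozen == (10, 20, 99, 40)
```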

  • Dunstabzugshaubitze@feddit.org
    2 months ago

    Multiple other objects might be holding a reference to the object you want to change, so you’d either have to recreate those too, or mutate them so they point to the new object.

    However, if you can do what you want in a side-effect-free way, I suggest doing that, as it is indeed easier to reason about what happens.
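    A quick Python sketch of the stale-reference problem (the `Config` type and the two services here are invented for illustration):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Config:
    timeout: int

shared = Config(timeout=30)
service_a = {"config": shared}
service_b = {"config": shared}

# "Changing" the immutable object means making a new one...
service_a["config"] = Config(timeout=60)

# ...but every other holder still points at the old value,
# unless you track down and update each reference.
assert service_a["config"].timeout == 60
assert service_b["config"].timeout == 30
```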

  • thenextguy@lemmy.world
    2 months ago

    Faster. Less memory. Maps well to physical things (e.g. a device with memory-mapped registers). No garbage collection or object destruction needed. No need to initialize new objects all the time.

  • 🇨🇦 tunetardis@lemmy.ca
    2 months ago

    As others have pointed out, there is the issue of breaking references to objects.

    There can also be a lot of memory thrashing if you have to keep reallocating and copying objects all the time. To some extent, that can be mitigated with an interning scheme for common values. In Python, for example, integers are immutable, and CPython interns the small ones (-5 through 256). But that doesn’t work well for everything.
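    The interning can be observed directly (this relies on a CPython implementation detail, not a language guarantee):

```python
# CPython caches small integers (-5 through 256), so "new" small ints
# are really references to the same interned objects.
a = int("100")
b = int("100")
assert a is b          # both names point at the cached 100

# Larger values fall outside the cache and get fresh allocations.
x = int("500")
y = int("500")
assert x is not y      # equal values, distinct objects
```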

    Any container you want to populate dynamically should probably be mutable to avoid O(N²) nastiness.
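    A sketch of that O(N²) trap in Python: growing an immutable tuple copies every existing element on each addition, while a mutable list appends in amortized O(1):

```python
# O(n^2) overall: each "append" rebuilds the entire tuple.
immutable = ()
for i in range(1000):
    immutable = immutable + (i,)   # copies all existing elements

# O(n) overall: the list mutates in place, over-allocating as it grows.
mutable = []
for i in range(1000):
    mutable.append(i)

assert list(immutable) == mutable
```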

  • DerArzt@lemmy.world
    2 months ago

    For all the people in this thread talking about the inefficiencies of immutability: they may find this talk by Rich Hickey (the creator of Clojure) interesting. Not so much because it shows they’re wrong, but because it’s a good lecture explaining how we can build immutable data structures that address the limitations of immutability in a way that reduces the overhead.

  • BehindTheBarrier@programming.dev
    2 months ago

    Try making a list without copying it every time you add something; mutability matters there. Imagine copying 10,000 elements, or 10,000 references to items, every time something is added or changed.

  • Traister101@lemmy.today
    2 months ago

    So you’re writing a game. This game has what I’m going to call “entities”: the dynamic NPCs and similar objects. These objects are most easily conceptualized as mutable things. Why mutable? Well, they move around, change state depending on game events, etc. If such an object were immutable, you’d have to tie the in-world representation to a new object constantly, just because it moved slightly or something else changed. The object is mutable not just because that’s easier to understand; there are even efficiency gains from not constantly creating a new version just because it moved a little bit.

    In contrast, the object which holds the position data (in this case, three doubles: x, y, z) makes a lot of sense as an immutable object. This kind of object is small, making it cheap to replace (it’s just 3 doubles, so 3 × 64 bits, or a total of 24 bytes), and it represents something that naturally makes sense as immutable: a set of three numbers.
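    Sketched in Python with invented names (the comment itself is language-agnostic): a mutable entity that swaps in new immutable position values as it moves.

```python
from dataclasses import dataclass

@dataclass(frozen=True)   # immutable value object: small and cheap to replace
class Position:
    x: float
    y: float
    z: float

    def translated(self, dx: float, dy: float, dz: float) -> "Position":
        return Position(self.x + dx, self.y + dy, self.z + dz)

class Entity:             # mutable: the same NPC persists across frames
    def __init__(self, name: str, position: Position):
        self.name = name
        self.position = position

    def move(self, dx: float, dy: float, dz: float) -> None:
        # Mutate the entity by swapping in a new immutable position.
        self.position = self.position.translated(dx, dy, dz)

npc = Entity("guard", Position(0.0, 0.0, 0.0))
npc.move(1.0, 0.0, 2.0)
assert npc.position == Position(1.0, 0.0, 2.0)
```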

    Now, another comparison: your typical dynamic-array container (this is your std::vector, Rust’s Vec, ArrayList and friends). These are mutable objects mainly for efficiency (it’s expensive to copy the contents when adding new values), yet they are also easier to conceptualize as mutable. Such an object is a collection of stuff, like a box: you can put things in and take things out, but it’s still the same box; only its contents have changed. If these objects were immutable, then to put something into the box you must first create a brand-new box, copy the old box’s contents into it, and then put your new item in. Every time. Sometimes that kind of thing makes sense, but it’s certainly not the common situation.

    Some functional languages do have immutable data structures, but in reality the compiler usually does some magic and ends up using a mutable type underneath, as it’s simply so much more efficient.

  • FizzyOrange@programming.dev
    2 months ago

    Yeah, the main reason is performance. In some languages, if you use a value “linearly” (i.e. there’s only ever one copy), then functional-style updates can be transformed into mutable in-place updates under the hood. But that’s usually treated as a performance optimisation, whereas you often want a performance guarantee.

    Koka is kind of an exception, but even there they say:

    Note. FBIP is still active research. In particular we’d like to add ways to add annotations to ensure reuse is taking place.

    From that point of view it’s quite similar to tail recursion: it’s often viewed as an optional optimisation, but often you want it guaranteed, so some languages have a keyword like become for that.
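    Python makes the optional-versus-guaranteed point concrete, since CPython never performs tail-call optimisation: the tail-recursive shape overflows the stack on deep inputs, and the hand-written loop is the guaranteed version (a toy illustration):

```python
def sum_recursive(n: int, acc: int = 0) -> int:
    # Tail-recursive in shape, but CPython never optimises it:
    # deep inputs will hit the recursion limit.
    if n == 0:
        return acc
    return sum_recursive(n - 1, acc + n)

def sum_iterative(n: int) -> int:
    # The manual "guaranteed" rewrite: constant stack space.
    acc = 0
    while n > 0:
        acc += n
        n -= 1
    return acc

assert sum_recursive(100) == sum_iterative(100) == 5050
```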

    Also, it’s sometimes easier to write code that uses mutation. It doesn’t always make code icky and hard to debug; I’d say it’s more of a very mild code smell. A code musk, if you like.

  • JakenVeina@lemm.ee
    2 months ago

    I’m gonna hazard a guess, just cause I’m curious, that you’re coming from JavaScript.

    Regardless, the answer’s basically the same across all similar languages where this question makes sense; that is, languages that are largely, if not completely, object-oriented, and where memory is managed for you.

    Bottom line: object allocation is VERY expensive. Generally, objects are allocated on a heap, so the allocation process itself, in its most basic form, involves walking some portion of a linked list to find an available heap block, updating a header or other info block to track that the block is now in use, maybe sub-dividing the block to avoid wasting space, and making any updates that might be necessary to nodes of the linked list we traversed.

    THEN, we have to run similar operations later for de-allocation. And if we’re talking about a memory-managed language, that means periodically running a garbage-collector algorithm that must somehow inspect blocks that are in use to see whether they’re still in use, or can be automatically de-allocated. The most common garbage collector I know of involves tagging all references within other objects, so that the GC can start at the “root” objects and walk the entire tree of references within references, in order to find any that are orphaned and identify them as collectable.
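    That reference walk can be sketched as a toy mark phase (a deliberately simplified model of mark-and-sweep, not any real collector’s implementation):

```python
# Each "object" is a node in a fake heap, holding references to other nodes.
heap = {
    "root": ["a", "b"],
    "a": ["c"],
    "b": [],
    "c": [],
    "orphan": ["orphan2"],   # nothing reachable from the roots points here
    "orphan2": [],
}

def mark(heap: dict, roots: list) -> set:
    # Walk the reference graph from the roots, marking everything reachable.
    reachable = set()
    stack = list(roots)
    while stack:
        obj = stack.pop()
        if obj not in reachable:
            reachable.add(obj)
            stack.extend(heap[obj])
    return reachable

live = mark(heap, ["root"])
garbage = set(heap) - live   # the sweep phase would free these
assert garbage == {"orphan", "orphan2"}
```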

    My bread and butter is C#, so let’s look at an actual example.

    public class MyMutableObject
    {
        public required ulong Id { get; set; }
    
        public required string Name { get; set; }
    }
    
    public record MyImmutableObject
    {
        public required ulong Id { get; init; }
    
        public required string Name { get; init; }
    }
    
    private MyImmutableObject _immutableInstance = new()
    {
        Id      = 1,
        Name    = "First"
    };
    
    private MyMutableObject _mutableInstance = new()
    {
        Id      = 1,
        Name    = "First"
    };
    
    [Benchmark(Baseline = true)]
    public MyMutableObject MutableEdit()
    {
        _mutableInstance.Name = "Second";
    
        return _mutableInstance;
    }
    
    [Benchmark]
    public MyImmutableObject ImmutableEdit()
        => _immutableInstance with
        {
            Name = "Second"
        };
    
    | Method        | Mean     | Error     | StdDev    | Ratio | RatioSD | Gen0   | Allocated | Alloc Ratio |
    |---------------|---------:|----------:|----------:|------:|--------:|-------:|----------:|------------:|
    | MutableEdit   | 1.080 ns | 0.0876 ns | 0.1439 ns |  1.02 |    0.19 |      - |         - |          NA |
    | ImmutableEdit | 8.282 ns | 0.2287 ns | 0.3353 ns |  7.79 |    1.03 | 0.0076 |      32 B |          NA |

    Even for the most basic edit operation, immutable copying is slower by more than 7 times, and (obviously) allocates more memory, which translates to more cost to be spent on garbage collection later.

    Let’s scale it up to a slightly more realistic immutable data structure.

    public class MyMutableParentObject
    {
        public required ulong Id { get; set; }
    
        public required string Name { get; set; }
    
        public required MyMutableChildObject Child { get; set; }
    }
    
    public class MyMutableChildObject
    {
        public required ulong Id { get; set; }
    
        public required string Name { get; set; }
    
        public required MyMutableGrandchildObject FirstGrandchild { get; set; }
                
        public required MyMutableGrandchildObject SecondGrandchild { get; set; }
                
        public required MyMutableGrandchildObject ThirdGrandchild { get; set; }
    }
    
    public class MyMutableGrandchildObject
    {
        public required ulong Id { get; set; }
    
        public required string Name { get; set; }
    }
    
    public record MyImmutableParentObject
    {
        public required ulong Id { get; init; }
    
        public required string Name { get; init; }
    
        public required MyImmutableChildObject Child { get; init; }
    }
    
    public record MyImmutableChildObject
    {
        public required ulong Id { get; init; }
    
        public required string Name { get; init; }
    
        public required MyImmutableGrandchildObject FirstGrandchild { get; init; }
    
        public required MyImmutableGrandchildObject SecondGrandchild { get; init; }
    
        public required MyImmutableGrandchildObject ThirdGrandchild { get; init; }
    }
    
    public record MyImmutableGrandchildObject
    {
        public required ulong Id { get; init; }
    
        public required string Name { get; init; }
    }
    
    private MyImmutableParentObject _immutableTree = new()
    {
        Id      = 1,
        Name    = "Parent",
        Child   = new()
        {
            Id                  = 2,
            Name                = "Child",
            FirstGrandchild     = new()
            {
                Id      = 3,
                Name    = "First Grandchild"
            },
            SecondGrandchild    = new()
            {
                Id      = 4,
                Name    = "Second Grandchild"
            },
            ThirdGrandchild     = new()
            {
                Id      = 5,
                Name    = "Third Grandchild"
            },
        }
    };
    
    private MyMutableParentObject _mutableTree = new()
    {
        Id      = 1,
        Name    = "Parent",
        Child   = new()
        {
            Id                  = 2,
            Name                = "Child",
            FirstGrandchild     = new()
            {
                Id      = 3,
                Name    = "First Grandchild"
            },
            SecondGrandchild    = new()
            {
                Id      = 4,
                Name    = "Second Grandchild"
            },
            ThirdGrandchild     = new()
            {
                Id      = 5,
                Name    = "Third Grandchild"
            },
        }
    };
    
    [Benchmark(Baseline = true)]
    public MyMutableParentObject MutableEdit()
    {
        _mutableTree.Child.SecondGrandchild.Name = "Second Grandchild Edited";
    
        return _mutableTree;
    }
    
    [Benchmark]
    public MyImmutableParentObject ImmutableEdit()
        => _immutableTree with
        {
            Child = _immutableTree.Child with
            {
                SecondGrandchild = _immutableTree.Child.SecondGrandchild with
                {
                    Name = "Second Grandchild Edited"
                }
            }
        };
    
    | Method        | Mean      | Error     | StdDev    | Ratio | RatioSD | Gen0   | Allocated | Alloc Ratio |
    |---------------|----------:|----------:|----------:|------:|--------:|-------:|----------:|------------:|
    | MutableEdit   |  1.129 ns | 0.0840 ns | 0.0825 ns |  1.00 |    0.10 |      - |         - |          NA |
    | ImmutableEdit | 32.685 ns | 0.8503 ns | 2.4534 ns | 29.09 |    2.95 | 0.0306 |     128 B |          NA |

    Not only is performance worse, it keeps degrading as you scale out the size and depth of your immutable structures, because every ancestor of an edited node has to be reallocated.
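    For comparison, the same nested edit in Python with frozen dataclasses and `dataclasses.replace` (a structural sketch, not a benchmark) shows where the extra allocations come from: every level above the edited node must be rebuilt.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Grandchild:
    id: int
    name: str

@dataclass(frozen=True)
class Child:
    id: int
    name: str
    second: Grandchild

@dataclass(frozen=True)
class Parent:
    id: int
    name: str
    child: Child

tree = Parent(1, "Parent", Child(2, "Child", Grandchild(4, "Second Grandchild")))

# Editing the grandchild forces new Child and Parent objects too.
edited = replace(
    tree,
    child=replace(
        tree.child,
        second=replace(tree.child.second, name="Second Grandchild Edited"),
    ),
)

assert edited.child.second.name == "Second Grandchild Edited"
assert tree.child.second.name == "Second Grandchild"   # original untouched
```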


    Now, all this being said, I myself use the immutable object pattern FREQUENTLY, in both C# and JavaScript. There are a lot of problems you encounter in business logic that it solves really well, and it’s basically the ideal type of data structure for use in reactive programming, which is extremely effective for building GUIs. In other words, I use immutable objects a ton when I’m building out the business layer of a UI, where data is king. If I were writing code within any of the frameworks I use to BUILD those UIs (.NET, WPF, ReactiveExtensions), you can bet I’d be using immutable objects way more sparingly.

  • 4am@lemm.ee
    2 months ago

    There are times when immutable objects are absolutely the way to go from a data safety perspective. And there are other times when speed or practicality prevail.

    Never become an extremist about any particular pattern. They’re all useful; to become a master, you must learn when each applies.

  • Shanmugha@lemmy.world
    2 months ago

    Logical and human-friendly answer: mutable objects are not the problem, poorly designed code is.

    Personal rant: why even bother with objects? Just use strings, ints, floats, arrays and hashmaps. (Sarcasm. I have spent hours uncovering the logic of large chunks of code with no declaration of what each function expects and produces.)

    And also, seeing endless create-object-from-data-of-another-object chains several times has made me want to punch the author of that code in the face. Even bare arrays and hashmaps were less insane than that clusterfuck.

  • Michal@programming.dev
    2 months ago

    Because recreating an entire object just to make a single change is dumb.

    God help you if you’ve already passed the object by reference and have to chase up all the references to point at the new version!

    • sudo@programming.dev
      2 months ago

      You can safely do a shallow copy and re-use references to the unchanged members if you have some guarantee that those members are also immutable. It’s called persistent data structures. But that’s a feature of the language, and it usually necessitates a GC.
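      In Python, `dataclasses.replace` is exactly such a shallow copy: unchanged members are shared rather than copied, which is safe only because they are themselves immutable (a small sketch with invented types):

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Child:
    name: str

@dataclass(frozen=True)
class Parent:
    name: str
    left: Child
    right: Child

old = Parent("p", Child("l"), Child("r"))
new = replace(old, name="p2")   # shallow copy with one field changed

# The unchanged children are shared, not copied...
assert new.left is old.left
assert new.right is old.right
# ...which is safe only because Child is itself immutable.
assert new.name == "p2" and old.name == "p"
```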