• deegeese@sopuli.xyz · 11 hours ago

    If you’re using a library to handle deserialization, the ugliness of the serialization format doesn’t matter that much.

    Just call yaml.load() and forget about it.
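
    Something like this with PyYAML, for example (rough sketch, made-up file name; safe_load is the sane default):

        import yaml  # PyYAML

        # Parse the whole document into plain Python lists/dicts/floats.
        with open("gaze_log.yaml") as f:   # hypothetical file name
            records = yaml.safe_load(f)

        print(records[0])                  # first sample as a dict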

    • BodilessGaze@sh.itjust.works · 9 hours ago

      That works until you realize your calculations are all wrong due to floating-point inaccuracies. YAML doesn’t require any particular level of precision for floats, so different parsers reading the same document may give you different results.
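
      For example, PyYAML just rounds everything to a 64-bit double, so digits beyond what a double can hold are silently dropped, while a parser using arbitrary-precision decimals would keep them (illustrative sketch):

          import yaml

          doc = "x: 2.00000000000000000001"
          print(yaml.safe_load(doc)["x"])  # 2.0 -- the extra digits are silently rounded away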

      • deegeese@sopuli.xyz · 9 hours ago

        What text-based serialization formats do enforce numeric precision?

        AFAIK it’s always left up to the writer (serializer).

        • BodilessGaze@sh.itjust.works · 8 hours ago

          Cuelang: https://cuelang.org/docs/reference/spec/#numeric-values

          Implementation restriction: although numeric values have arbitrary precision in the language, implementations may implement them using an internal representation with limited precision. That said, every implementation must:

          • Represent integer values with at least 256 bits.
          • Represent floating-point values with a mantissa of at least 256 bits and a signed binary exponent of at least 16 bits.
          • Give an error if unable to represent an integer value precisely.
          • Give an error if unable to represent a floating-point value due to overflow.
          • Round to the nearest representable value if unable to represent a floating-point value due to limits on precision.

          These requirements apply to the result of any expression except for builtin functions, for which an unusual loss of precision must be explicitly documented.
  • fibojoly@sh.itjust.works · 12 hours ago

    I’m amazed at developers who don’t grasp that you don’t need to have absolutely everything under the sun in a human-readable file format. This is such a textbook case…

    • FuckBigTech347@lemmygrad.ml · 5 hours ago

      Exactly. All modern CPUs are so standardized that there is little reason to store all the data in ASCII text. It’s so much faster and less complicated to just keep the raw binary on disk.

    • chaospatterns@lemmy.world · 7 hours ago

      Yeah, this isn’t really human-readable even when it’s in YAML. What am I going to do? Read the floats and understand that the person looked left?

    • marcos@lemmy.world · 10 hours ago

      Even if you want it to be human-readable, you don’t need to include the field names in every record or use balanced separators.

      Any CSV variant would be an improvement already.
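
      E.g. a one-off conversion with Python’s csv module (sketch; the file and field names are made up):

          import csv
          import yaml

          # Load the YAML list of records (each record is a dict of named fields).
          with open("gaze_log.yaml") as f:
              rows = yaml.safe_load(f)

          with open("gaze_log.csv", "w", newline="") as f:
              writer = csv.DictWriter(f, fieldnames=rows[0].keys())
              writer.writeheader()     # field names written once, in the header...
              writer.writerows(rows)   # ...instead of being repeated in every record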

      • fibojoly@sh.itjust.works · 6 hours ago

        Even using C#'s decimal type (128 bits) would be an improvement! I count 22 characters per number here, so a minimum of 176 bits.

  • wise_pancake@lemmy.ca · 12 hours ago

    I’d probably just use line-delimited JSON or CSV for this use case. It plays nicely with cat and other standard tools, and basically all the YAML is doing is wrapping raw JSON and adding extra parse time/complexity.

    In the end, consider converting this to Parquet for analysis. You probably won’t get much from compression or row-group clustering, but you will get benefits from the column-store format when reading the data.
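
    Roughly like this, assuming pandas/pyarrow and a made-up file name (sketch):

        import pandas as pd

        # Line-delimited JSON: one self-contained record per line, plays well with cat/grep/tail.
        df = pd.read_json("gaze_log.ndjson", lines=True)

        # One-off conversion to Parquet for analysis: columnar, typed, fast to scan.
        df.to_parquet("gaze_log.parquet")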

    • qaz@lemmy.world (OP) · 11 hours ago

      Thanks for the advice, but this is just the format of some eye-tracking software I had to use, not something I develop myself.

    • BodilessGaze@sh.itjust.works · 9 hours ago

      YAML doesn’t require any level of accuracy for floating point numbers, and that doc appears to have numbers large enough to run into problems for single-precision floats (maybe double too). That means different parsers could give you different results.
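
      For instance (illustrative, using numpy), 2^24 + 1 survives a double but not a single-precision float:

          import numpy as np

          x = 16777217.0              # 2**24 + 1
          print(np.float32(x) == x)   # False -- rounded to 16777216.0 in single precision
          print(np.float64(x) == x)   # True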

  • slackness@lemmy.ml · 16 hours ago

    Fuck YAML. I’m not parsing data structured with spaces and newlines with my eyes. Use visible characters.

  • MonkderVierte@lemmy.zip · 17 hours ago

    Maybe use a real database for that? I’m a fan of simple tools (e.g. plain text) for simple use cases, but please use appropriate tools.

    • nous@programming.dev · 16 hours ago

      What is wrong with a file for this? It sounds more like a local log or debug output that a single thread in a single process would be creating. A file is fine for high-volume, append-only data like this. The only big issue is the format of that data.

      What benefit would a database bring here?

      • Azzu@lemmy.dbzer0.com · 15 hours ago

        Because this is not log or debug data, as OP said. In any case, what do you think will happen with this data? It will be analyzed by some sort of tool, because no one could manually look at this much text. In text form this can be something like 1 MB of data per second, so a normal eye-tracking session probably produces hundreds of MB. The problem isn’t the storage space, but the time it takes to read that in and analyze it each time, forcing you to wait for processing or to use lots of memory while reading it. And anyway, in most languages it’s actually much easier to store the number values directly (in 8 bytes, not the 30-something this text representation uses) than to convert them to JSON; all languages have some built-in way to do that. And even if not, SQLite is piss-easy and does everything for you, being about as simple as JSON.

        There is just no reason to do it like that unless you just don’t think about what you’re doing or have no clue.
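
        In Python, for example, that “store the numbers directly” option is just this (sketch; values and file name are made up):

            import struct

            samples = [512.3381924629211, 289.4120381207245]  # made-up gaze values

            with open("gaze_log.bin", "ab") as f:             # hypothetical append-only file
                for value in samples:
                    f.write(struct.pack("<d", value))         # exactly 8 bytes per double, no text parsing later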

      • towerful@programming.dev · 16 hours ago

        Smaller file size, lower data rate, less computational overhead, no conversion loss.

        A 64-bit float requires 64 bits to store.
        The ASCII representation of a 64-bit float (in the example above) is 21 characters, or 168 bits.
        Also, if every record has the same fields, there is a huge overhead for storing the name of each value, plus the extra spaces, commas and braces.
        So you are at least doubling the file size and data throughput. And there is precision loss when converting float → string → float, plus the computational overhead of doing those conversions.

        Something like SQLite is lightweight, fast and will store the native data types.
        It is widely supported, and allows for easy querying of the data.
        It also makes it easy for third-party programs to interact with the data.

        If you are ever thinking of implementing some sort of data storage in files, consider SQLite first.
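
        A minimal sketch of that with Python’s built-in sqlite3 module (table and column names are made up):

            import sqlite3

            con = sqlite3.connect("gaze_log.db")  # a single file, like the YAML was
            con.execute("CREATE TABLE IF NOT EXISTS samples (ts REAL, x REAL, y REAL)")
            con.execute("INSERT INTO samples VALUES (?, ?, ?)", (0.016, 512.338, 289.412))
            con.commit()

            # Easy querying later, from Python or any third-party tool that speaks SQLite:
            for row in con.execute("SELECT avg(x), avg(y) FROM samples"):
                print(row)
            con.close()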

      • NeatNit@discuss.tchncs.de · 16 hours ago

        I think SQLite is a great middle ground. It saves the database as a single .db file, and can do everything an SQL database can do. Querying for data is a lot more flexible and a lot faster. The tools for manipulating the data in any way you want are very good and very robust.

        However, I’m not sure how it would affect file size. It might be smaller because JSON/YAML wastes a lot of characters on redundant information (field names) and storing numbers as text, which the database would store as binary data in a defined structure. On the other hand, extra space is used to make common SQL operations happen much faster using fancy data structures. I don’t know which effect is greater so file size could be bigger or smaller.

        • GenderNeutralBro@lemmy.sdf.org · 9 hours ago

          SQLite would definitely be smaller, faster, and require less memory.

          Thing is, it’s 2025, roughly 20 years since anybody’s given half a shit about storage efficiency, memory efficiency, or even CPU efficiency for anything so small. Presumably this is not something they need to query dynamically.

          • NeatNit@discuss.tchncs.de · 8 hours ago

            True (in most contexts, probably including this one), but I think that only makes the case for SQLite stronger. What people do still care about is a good flexible, usable and reliable interface. I’m not sure how to get that with YAML.

        • Scrath@lemmy.dbzer0.com · 12 hours ago

          I didn’t look too much at the data, but I think CSV might actually be an appropriate format for this?

          Nice simple plain text, and very easy to parse into a data structure for analysing/using it in Python or similar.
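
          Something like this, for example (sketch; file and column names are made up):

              import csv

              with open("gaze_log.csv", newline="") as f:
                  for row in csv.DictReader(f):
                      x = float(row["gaze_x"])   # CSV gives you strings, so convert explicitly
                      y = float(row["gaze_y"])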

          • nous@programming.dev · 7 hours ago

            CSV would be fine. The big problem with the data as presented is that it is a YAML list, so the whole file needs to be read into memory and decoded before you get any values out of it. Any line-based encoding would be vastly better and would allow line-based processing: CSV, JSON objects encoded one per line, or some other streaming binary format. It does not make much difference overall, as long as it is line-based or at least streamable.
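
            For example, line-delimited JSON can be processed one record at a time without ever holding the whole file in memory (sketch, made-up file name):

                import json

                with open("gaze_log.ndjson") as f:   # one JSON object per line
                    for line in f:
                        record = json.loads(line)    # constant memory, works even while the file is still being appended to
                        # ...process record...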

  • BestBouclettes@jlai.lu · 17 hours ago

    I really like YAML, but way too many people use it beyond its purpose… I work with GitLab CI, and seeing complex Bash scripts inline in YAML files makes me want to hurt people.